[Lugstuff] shared memory kernel

Theodore Knab tjk at annapolislinux.org
Wed Sep 22 14:41:39 EDT 2010


Thanks for this information.

I can't say I really understand the Buddy Allocator.

But I understand more than I did last week, which is an improvement.

Do you know of any tools that can be used to monitor memory
fragmentation?
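
One thing that looks relevant is /proc/buddyinfo, which seems to list the
number of free blocks at each allocation order for every memory zone. A
rough sketch for watching it over time (interpreting the columns is the
hard part):

#!/bin/sh
#sketch: dump the buddy allocator's free-block counts every 5 seconds;
#each column is the count of free blocks of 2^order pages, so a
#shrinking right-hand side suggests big contiguous blocks are scarce
while true
do
    date
    cat /proc/buddyinfo
    sleep 5
done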

Also, is it better to give all the 'resident' RAM to the 'shared'
pool? More specifically, in Linux there is kernel.shmmax, which is the
maximum size of any single shared memory segment a process can
allocate, and kernel.shmall, which is the maximum amount of shared
memory that can be used system-wide (measured in pages). If I am
understanding the concepts right, giving the kernel the maximum shmall
size will let the system run more efficiently, because all the running
programs can draw from the same memory pool as needed, making the
system happier.

Currently, for a 24GB database machine also running Apache and sendmail,
I changed the kernel.shmall value from a 16GB limit to a 24GB limit.

Will this create issues? Is there ever a time the OS needs to use
'resident' RAM rather than 'shared' RAM?


#source: Informix defaults
#shmmax is the maximum size of a single shared memory segment, in bytes
kernel.shmmax = 4398046511104
#shmall is the maximum shared memory to use, in pages
#16GB RAM = 17179869184 bytes
#we get this number by dividing the RAM in bytes by the system page size
#page size is 4096 on most 64-bit systems
kernel.shmall = 4194304

#changed to
kernel.shmmax = 4398046511104
#24GB = 25769803776 bytes / 4096 bytes per page = 6291456 pages
kernel.shmall = 6291456
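
For sanity-checking numbers like these, something along the lines of the
sketch below seems workable (it assumes /proc/meminfo and getconf report
what they usually do, so double-check before trusting the output):

#!/bin/sh
#sketch: derive shmall (in pages) from total RAM and the page size,
#then print what the kernel currently has configured
mem_kb=`awk '/MemTotal:/ {print $2}' /proc/meminfo`
mem_bytes=`expr $mem_kb \* 1024`
page_size=`getconf PAGE_SIZE`
echo "suggested kernel.shmall = `expr $mem_bytes / $page_size` pages"
sysctl kernel.shmmax kernel.shmall kernel.shmmni

After changing /etc/sysctl.conf, 'sysctl -p' reloads it, and 'ipcs -m'
shows which shared memory segments are actually in use.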


On 16/09/10 20:10 -0400, Thomas Gallen wrote:
> Paging hell refers to (I believe) the kernel swapping out pages and
> swapping in pages (page in/page out) rapidly due to programs allocating
> too much memory (again, I believe). The translation between virtual
> memory addresses and physical memory addresses allows Linux to give
> programs the illusion that there's much more memory available than
> there actually is. The intent is to allow the kernel to manage the
> allocation, deallocation, and swapping of chunks of contiguous pages
> transparently so that the user doesn't need to know that the seemingly
> linear memory chunk they have allocated may actually be spread out
> between multiple pages that are not necessarily linear in physical
> memory. The additional more obviously consequence of swapping pages
> into and out of swap is that, as it's bound to a physical hard disk,
> the performance impact is rather horrific.
> 
> On another note, unless I'm mistaken as to the meaning of the
> statistics, vmstat will allow you to monitor the page in and page out
> (though I think it calls them swap in (si) and swap out (so)) rates as
> currently reported by the kernel. So if you want to try playing with the
> sysctl values while doing some load tests and keeping a close eye on
> vmstat, then perhaps that may help you find the sweet spot you're
> looking for.
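> 
> A sketch of what that looks like (going from memory on the column names,
> so check the header line vmstat prints):
> 
>   # print stats every 5 seconds; the si (swap in) and so (swap out)
>   # columns under "swap" show how much memory is being swapped in and
>   # out per second -- sustained non-zero values are the symptom
>   vmstat 5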
> 
> Disclaimer: I'm not a Linux kernel developer and am still trying to wrap
> my head around all of the Linux memory management stuff myself. Some
> interesting reading:
> 
> http://scriptmatrix.net/cs2106/wiki/index.php?title=Buddy_Allocator
> http://linux-mm.org/PageAllocation
> 
> Thomas
> 
> On Thu, Sep 16, 2010 at 05:58:57PM -0400, Theodore Knab wrote:
> > Thanks Thomas,
> > 
> > I saw that article before, but I re-read it again.
> > 
> > It seems paging hell is the result of setting shared memory too high.
> > 
> > But, what is paging hell?
> > 
> > 
> > On 16/09/10 09:31 -0400, Thomas Gallen wrote:
> > > I believe this will answer most of your questions:
> > > 
> > > http://www.pythian.com/news/245/the-mysterious-world-of-shmmax-and-shmall/
> > > 
> > > Thomas
> > > 
> > > On Sep 16, 2010, at 8:43 AM, Theodore Knab wrote:
> > > 
> > > > Hi, I was wondering if anyone knows how to tune the three shared memory
> > > > parameters found in /etc/sysctl.conf. For those of you not familiar with
> > > > sysctl, it holds tunable kernel settings.
> > > > 
> > > > http://en.wikipedia.org/wiki/Sysctl
> > > > 
> > > > Normally, you have to increase the default settings for shared
> > > > memory use when messing with database applications and other applications that lock a lot
> > > > of memory.
> > > > 
> > > > The three I am focusing on are:
> > > > kernel.shmmax
> > > > kernel.shmall
> > > > kernel.shmmni
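> > > > 
> > > > A quick way to see what the kernel currently has set (just a sketch; the
> > > > exact output format varies a bit between distributions):
> > > > 
> > > > #show the running values
> > > > sysctl kernel.shmmax kernel.shmall kernel.shmmni
> > > > #or the same SysV IPC limits as ipcs reports them
> > > > ipcs -l
> > > > #after editing /etc/sysctl.conf, reload it with
> > > > sysctl -p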
> > > > 
> > > > More specifically, I am wondering what happens if the tuning numbers for
> > > > shmmax are set too high? Does the kernel think it has more RAM than it
> > > > does? The reverse certainly seems to be true: if the shared memory limit
> > > > is too low, applications requiring more shared memory will crash or just
> > > > not start.
> > > > 
> > > > For example, my home system runs Zone Minder for keeping track of
> > > > camera data. The default shmmax setting of 32MB had to be
> > > > increased for this application to work.
> > > > 
> > > > In contrast, I found an Informix machine that was tuned with a
> > > > shmmax that I think was outrageously high.
> > > > 
> > > > For example, I found a system with 24GB had this parameter:
> > > > kernel.shmmax = 4398046511104 
> > > > 
> > > > According to the Google calculation this number translates to:
> > > > 4 398 046 511 104 bytes = 4096 gigabytes
> > > > 
> > > > This is the default setting that IBM Informix lists on their website for
> > > > their database application, so I can understand how it may have been set.
> > > > However, it seems not many people would have 4096GB or more of RAM in
> > > > their machine.
> > > > 
> > > > http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp?topic=/com.ibm.relnotes.doc/ids_1150xc5/mach/ids_machine_notes_11.50.linppc64.html
> > > > 
> > > > I have since reset the setting to 75% of the total RAM in bytes, but I am
> > > > still wondering what happens to a Linux system when it is set too high.
> > > > The machine in question locked up twice due to all the memory being used,
> > > > and that is not something that created a log entry. The machine also
> > > > seemed to become unstable once it started swapping, just as if all the
> > > > applications on the machine were trying to consume more RAM than the
> > > > system had. For example, recently while running a database upgrade script
> > > > on the machine, I could not run simple shell commands. When I tried to
> > > > log in, I could type a few characters, but only after a few minutes of
> > > > waiting. I tried 'ctrl-alt-del', but it appeared the shutdown program
> > > > could not be run in the time I had available to wait. It was not a hard
> > > > crash where errors appear.
> > > > 
> > > > Could this be a possible symptom of the kernel.shmmax being too high?
> > > > 
> > > > Here are the current settings I am using for the 24GB machine:
> > > > kernel.shmmax = 18956485632 
> > > > kernel.shmall = 4194304 
> > > > kernel.shmmni = 4096
> > > > 
> > > > I used this script to calculate the max shared memory:
> > > > #!/bin/sh
> > > > #limits kernel.shmmax to 75% of physical memory
> > > > mem_bytes=`awk '/MemTotal:/ { printf "%.0f", $2 * 1024 }' /proc/meminfo`
> > > > mem_max=`expr $mem_bytes / 4` #one quarter of RAM
> > > > 
> > > > echo "mem max is $mem_max"
> > > > 
> > > > mem_max=`expr $mem_max + $mem_max + $mem_max` #three quarters, i.e. 75%
> > > > page_size=`getconf PAGE_SIZE`
> > > > shmall=`expr $mem_bytes / $page_size`
> > > > echo \# Maximum shared segment size in bytes
> > > > echo kernel.shmmax = $mem_max
> > > > echo \# Maximum amount of shared memory in pages
> > > > echo kernel.shmall = $shmall
> > > > 
> > > > 
> > > > -- 
> > > > Ted Knab
> > > > 
> > > 
> > 
> > -- 
> > Ted Knab
> > Stevensville, MD USA
> > 

-- 
Ted Knab
Stevensville, MD USA



