Squid uses a lot of memory for performance reasons. It takes much, much longer to read something from disk than it does to read directly from memory.
A small amount of metadata for each cached object is kept in memory, in the StoreEntry data structure. For Squid-2 this is 56 bytes on "small"-pointer architectures (Intel, Sparc, MIPS, etc.) and 88 bytes on "large"-pointer architectures (Alpha). In addition, there is a 16-byte cache key (MD5 checksum) associated with each StoreEntry. This means there are 72 or 104 bytes of metadata in memory for every object in your cache. A cache with 1,000,000 objects therefore requires 72 MB of memory for metadata alone. In practice it requires much more than that.
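The arithmetic above can be sketched in a couple of shell lines (72 bytes is the small-pointer figure from the text: 56-byte StoreEntry plus 16-byte MD5 key):

```shell
# Estimate in-memory metadata for a cache of a given size.
objects=1000000        # number of cached objects
bytes_per_object=72    # 56-byte StoreEntry + 16-byte cache key
echo "$(( objects * bytes_per_object / 1000000 )) MB"   # prints: 72 MB
```

Scale objects to your own cache size to get a lower bound on Squid's metadata memory.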
Squid-1.1 also uses a lot of memory to store in-transit objects. This version stores incoming objects only in memory, until the transfer is complete. At that point it decides whether or not to store the object on disk. This means that when users download large files, your memory usage will increase significantly. The squid.conf parameter maximum_object_size determines how much memory an in-transit object can consume before we mark it as uncachable. When an object is marked uncachable, there is no need to keep all of the object in memory, so the memory is freed for the part of the object which has already been written to the client. In other words, lowering maximum_object_size also lowers Squid-1.1 memory usage.
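In squid.conf the relevant directive might look like this (the 4 MB value is purely an illustrative assumption, not a recommendation):

```
# squid.conf (Squid-1.1): objects larger than this are marked uncachable,
# which also frees the in-transit memory already written to the client.
maximum_object_size 4096 KB
```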
Other uses of memory by Squid include:
- Disk buffers for reading and writing
- Network I/O buffers
- IP Cache contents
- FQDN Cache contents
- Netdb ICMP measurement database
- Per-request state information, including full request and reply headers
- Miscellaneous statistics collection.
- ``Hot objects'' which are kept entirely in memory.
One way to check how much memory Squid is using is simply to look at the ps output on your system. For BSD-ish systems, you probably want to use the -u option and look at the VSZ and RSS fields:
wessels ~ 236% ps -axuhm
USER       PID %CPU %MEM    VSZ    RSS  TT  STAT STARTED     TIME COMMAND
squid     9631  4.6 26.4 141204 137852  ??  S   10:13PM 78:22.80 squid -NCYs

For SYSV-ish systems, you probably want to use the -l option. When interpreting the ps output, be sure to check your ps manual page. It may not be obvious if the reported numbers are kbytes, or pages (usually 4 kb).
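A minimal, portable way to pull just the VSZ and RSS fields for one process is shown below; our own shell's PID stands in for the squid PID, which is an assumption made purely for illustration:

```shell
# Print VSZ and RSS (kilobytes on most systems) for a single PID.
# For Squid, replace $$ with the squid PID (e.g. from squid.pid).
pid=$$
ps -o vsz= -o rss= -p "$pid"
```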
A nicer way to check the memory usage is with a program called top:
last pid: 20128;  load averages:  0.06,  0.12,  0.11    14:10:58
46 processes:  1 running, 45 sleeping
CPU states:     % user,     % nice,     % system,     % interrupt,     % idle
Mem: 187M Active, 1884K Inact, 45M Wired, 268M Cache, 8351K Buf, 1296K Free
Swap: 1024M Total, 256K Used, 1024M Free

  PID USERNAME PRI NICE  SIZE    RES STATE    TIME   WCPU    CPU COMMAND
 9631 squid      2    0  138M   135M select  78:45  3.93%  3.93% squid
Finally, you can ask the Squid process to report its own memory usage. This is available on the Cache Manager info page. Your output may vary depending upon your operating system and Squid version, but it looks similar to this:
Resource usage for squid:
        Maximum Resident Size: 137892 KB
Memory usage for squid via mstats():
        Total space in arena:  140144 KB
        Total free:              8153 KB 6%
If your RSS (Resident Set Size) value is much lower than your process size, then your cache performance is most likely suffering due to paging.
You might just have your cache_mem parameter set too high. See the ``What can I do to reduce Squid's memory usage?'' entry below.
When a process continually grows in size, without levelling off or slowing down, it often indicates a memory leak: a chunk of memory is allocated but never freed once it is no longer needed.
Memory leaks are a real problem for programs (like Squid) which do all of their processing within a single process. Historically, Squid has had real memory leak problems. But as the software has matured, we believe almost all of Squid's memory leaks have been eliminated, and new ones are at least easy to identify.
Memory leaks may also be present in your system's libraries, such as libc.a or even libmalloc.a. If you experience the ever-growing process size phenomenon, we suggest you first try an alternative malloc library.
The cache_mem parameter does NOT specify the maximum size of the process. It only specifies how much memory to use for caching ``hot'' (very popular) replies. Squid's actual memory usage depends very strongly on your cache size (disk space) and your incoming request load. Reducing cache_mem will usually also reduce the process size, but not necessarily, and there are other ways to reduce Squid's memory usage (see below).
Note: This information is specific to Squid-1.1 versions.

Look at your cachemgr.cgi Information page. For example:
Memory usage for squid via mallinfo():
        Total space in arena:   94687 KB
        Ordinary blocks:        32019 KB 210034 blks
        Small blocks:           44364 KB 569500 blks
        Holding blocks:             0 KB   5695 blks
        Free Small blocks:       6650 KB
        Free Ordinary blocks:   11652 KB
        Total in use:           76384 KB 81%
        Total free:             18302 KB 19%
Meta Data:
        StoreEntry                246043 x   64 bytes = 15377 KB
        IPCacheEntry                 971 x   88 bytes =    83 KB
        Hash link                      2 x   24 bytes =     0 KB
        URL strings                                   = 11422 KB
        Pool MemObject structures    514 x  144 bytes =    72 KB (   70 free)
        Pool for Request structur    516 x 4380 bytes =  2207 KB ( 2121 free)
        Pool for in-memory object   6200 x 4096 bytes = 24800 KB (22888 free)
        Pool for disk I/O            242 x 8192 bytes =  1936 KB ( 1888 free)
        Miscellaneous                                 =  2600 KB
        total Accounted                               = 58499 KB
First note that mallinfo() reports 94M in ``arena.'' This is pretty close to what top says (97M). Of that 94M, 81% (76M) is actually being used at the moment. The rest has been freed, or pre-allocated by malloc(3) and not yet used.

Of the 76M in use, we can account for 58.5M (76%). There are some uses of malloc(3) for which we can't account.

The Meta Data list gives the breakdown of where the accounted memory has gone. 45% has gone to StoreEntry and URL strings. Another 42% has gone to buffering hold objects in VM while they are fetched and relayed to the clients (Pool for in-memory object).
The pool sizes are specified by squid.conf parameters. In version 1.0, these pools are somewhat broken: we keep a stack of unused pages instead of freeing the block. In the Pool for in-memory object, the unused stack size is 1/2 of cache_mem. The Pool for disk I/O is hardcoded at 200. For MemObject and Request it's 1/8 of your system's FD_SETSIZE value.
If you need to lower your process size, we recommend lowering the max object sizes in the 'http', 'ftp' and 'gopher' config lines. You may also want to lower cache_mem to suit your needs. But if you make cache_mem too low, then some objects may not get saved to disk during high-load periods. Newer Squid versions allow you to set memory_pools off to disable the free memory pools.
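In squid.conf those two knobs might look like this (the 8 MB value is an illustrative assumption, not a recommendation):

```
# Keep fewer ``hot'' objects cached in memory.
cache_mem 8 MB
# Return unused pool memory to the system instead of stacking it.
memory_pools off
```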
We are not able to account for all memory that Squid uses. This would require excessive amounts of code to keep track of every last byte. We do our best to account for the major uses of memory.
Also, note that the malloc and free functions have their own overhead. Some additional memory is required to keep track of which chunks are in use, and which are free. Additionally, most operating systems do not allow processes to shrink in size. When a process gives up memory by calling free, the total process size does not shrink. So the process size really represents the maximum size your Squid process has reached.
Messages like "FATAL: xcalloc: Unable to allocate 4096 blocks of 1 bytes!" appear when Squid can't allocate more memory. On most operating systems (including BSD) there are only two possible reasons:
- The machine is out of swap
- The process' maximum data segment size has been reached
To tell if it is the second case, first rule out the first case and then monitor the size of the Squid process. If it dies at a certain size with plenty of swap left, then the maximum data segment size has no doubt been reached.
The data segment size can be limited by two factors:
- A kernel-imposed maximum, which no user can exceed
- The size set with ulimit, which the user can control
When Squid starts it sets the data and file ulimits to the hard level. If you manually tune ulimit before starting Squid, make sure that you set the hard limit and not only the soft limit (the default operation of ulimit is to change only the soft limit). root is allowed to raise the soft limit above the hard limit.
This command prints the hard limits:

ulimit -aH

This command sets the data size to unlimited:

ulimit -HSd unlimited
by Arjan de Vet
The default kernel limit on BSD/OS for datasize is 64MB (at least on 3.0 which I'm using).
Recompile a kernel with larger datasize settings:
maxusers        128
# Support for large inpcb hash tables, e.g. busy WEB servers.
options         INET_SERVER
# support for large routing tables, e.g. gated with full Internet routing:
options         "KMEMSIZE=\(16*1024*1024\)"
options         "DFLDSIZ=\(128*1024*1024\)"
options         "DFLSSIZ=\(8*1024*1024\)"
options         "SOMAXCONN=128"
options         "MAXDSIZ=\(256*1024*1024\)"
See /usr/share/doc/bsdi/config.n for more info.
In /etc/login.conf I have this:
default:\
        :path=/bin /usr/bin /usr/contrib/bin:\
        :datasize-cur=256M:\
        :openfiles-cur=1024:\
        :openfiles-max=1024:\
        :maxproc-cur=1024:\
        :stacksize-cur=64M:\
        :radius-challenge-styles=activ,crypto,skey,snk,token:\
        :tc=auth-bsdi-defaults:\
        :tc=auth-ftp-bsdi-defaults:
#
# Settings used by /etc/rc and root
# This must be set properly for daemons started as root by inetd as well.
# Be sure reset these values back to system defaults in the default class!
#
daemon:\
        :path=/bin /usr/bin /sbin /usr/sbin:\
        :widepasswords:\
        :tc=default:
#       :datasize-cur=128M:\
#       :openfiles-cur=256:\
#       :maxproc-cur=256:\
This should give enough space for a 256MB squid process.
by Duane Wessels
The procedure is almost identical to that for BSD/OS above. Increase the open file descriptor limit in /sys/conf/param.c:
int     maxfiles = 4096;
int     maxfilesperproc = 1024;

Increase the maximum and default data segment size in your kernel config file, e.g. /sys/conf/i386/CONFIG:
options         "MAXDSIZ=(512*1024*1024)"
options         "DFLDSIZ=(128*1024*1024)"

We also found it necessary to increase the number of mbuf clusters:
options         "NMBCLUSTERS=10240"

And, if you have more than 256 MB of physical memory, you probably have to disable BOUNCE_BUFFERS (whatever that is), so comment out this line:
#options BOUNCE_BUFFERS #include support for DMA bounce buffers
Also, update limits in /etc/login.conf:
# Settings used by /etc/rc
#
daemon:\
        :coredumpsize=infinity:\
        :datasize=infinity:\
        :maxproc=256:\
        :maxproc-cur@:\
        :memoryuse-cur=64M:\
        :memorylocked-cur=64M:\
        :openfiles=4096:\
        :openfiles-cur@:\
        :stacksize=64M:\
        :tc=default:

And don't forget to run ``cap_mkdb /etc/login.conf'' after editing that file.
OSF, Digital Unix
by Ong Beng Hui
To increase the data size for Digital UNIX, edit the file /etc/sysconfigtab and add the entry:

proc:
        per-proc-data-size=1073741824

Or, with csh, use the limit command, such as
> limit datasize 1024M
Editing /etc/sysconfigtab requires a reboot, but the limit command takes effect immediately.
When Squid is reconfigured (SIGHUP) or the logs are rotated (SIGUSR1), some of the helper processes (dnsserver) must be killed and restarted. If your system does not have enough virtual memory, the Squid process may not be able to fork to start the new helper processes. The best way to fix this is to increase your virtual memory by adding swap space. Normally your system uses raw disk partitions for swap space, but most operating systems also support swapping on regular files (Digital Unix excepted). See your system manual pages for swap, swapon, and mkfile.
If your cache performance is suffering because of memory limitations, you might consider buying more memory. But if that is not an option, there are a number of things to try:
- Try a different malloc library.
- Reduce the cache_mem parameter in the config file. This controls how many ``hot'' objects are kept in memory. Reducing this parameter will not significantly affect performance, but you may receive some warnings in cache.log if your cache is busy.
- Turn memory_pools off in the config file. This causes Squid to give up unused memory by calling free() instead of holding on to the chunk for potential future use.
- Reduce the cache_swap parameter in your config file. This will reduce the number of objects Squid keeps. Your overall hit ratio may go down a little, but your cache will perform significantly better.
- Reduce the maximum_object_size parameter (Squid-1.1 only). You won't be able to cache the larger objects, and your byte volume hit ratio may go down, but Squid will perform better overall.
- If you are using Squid-1.1.x, try the ``NOVM'' version.
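Several of these tips map directly to squid.conf directives; a minimal sketch, with purely illustrative values:

```
cache_mem 8 MB               # fewer hot objects held in memory
memory_pools off             # return freed memory to the system
cache_swap 1000              # Squid-1.1: total disk cache size, in MB
maximum_object_size 4096 KB  # Squid-1.1: don't cache very large objects
```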
Many users report improved performance and memory utilization when Squid is linked with an external malloc library. We recommend GNU malloc or dlmalloc.

Using GNU malloc

To make Squid use GNU malloc, follow these simple steps:
- Download the GNU malloc source, available from the GNU FTP mirrors.
- Compile GNU malloc:

% gzip -dc malloc.tar.gz | tar xf -
% cd malloc
% vi Makefile     # edit as needed
% make
- Copy libmalloc.a to your system's library directory and be sure it is named libgnumalloc.a:

% su
# cp malloc.a /usr/lib/libgnumalloc.a
- (Optional) Copy the GNU malloc.h to your system's include directory and be sure it is named gnumalloc.h. This step is not strictly necessary, but if you do it, Squid will be able to use the mstat() function to report memory usage statistics on the cachemgr info page:

# cp malloc.h /usr/include/gnumalloc.h
- Reconfigure and rebuild Squid:

% make realclean
% ./configure ...
% make
% make install

Note that in later distributions 'realclean' has been replaced by 'distclean'. As the configure script runs, watch its output: you should see it detect libgnumalloc.a and gnumalloc.h.
Using dlmalloc

This is not the fastest, most space-conserving, most portable, or most tunable malloc ever written. However, it is among the fastest while also being among the most space-conserving, portable and tunable.
dlmalloc is included with the Squid-2 source distribution. To use this library, simply give the configure script this option:
% ./configure --enable-dlmalloc ...