
14. System-Dependent Weirdnesses :)

14.1 Solaris

select()

select(3c) won't handle more than 1024 file descriptors. The configure script should enable poll() by default for Solaris. poll() allows you to use many more file descriptors, 8192 or more.

For older Squid versions you can enable poll() manually by changing HAVE_POLL in include/autoconf.h, or by adding -DUSE_POLL=1 to the DEFINES in src/Makefile.
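As a sketch only, the src/Makefile change can be made with sed. The Makefile below is a minimal stand-in, not a real Squid Makefile — the exact shape of the DEFINES line varies between releases:

```shell
# Demonstrate adding -DUSE_POLL=1 to the DEFINES line of src/Makefile.
# /tmp/squid-demo is a mock tree used purely for illustration.
mkdir -p /tmp/squid-demo/src
cat > /tmp/squid-demo/src/Makefile <<'EOF'
DEFINES = -DHAVE_CONFIG_H
EOF

# Prepend -DUSE_POLL=1 to whatever DEFINES already contains.
sed -i 's/^DEFINES = /DEFINES = -DUSE_POLL=1 /' /tmp/squid-demo/src/Makefile

cat /tmp/squid-demo/src/Makefile
```

After this, a rebuild would pick up the new define.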

malloc

There is a memory leak in libmalloc.a, so Squid's configure does not use -lmalloc on Solaris.

DNS lookups and nscd

by David J N Begley.

DNS lookups can be slow because of some mysterious thing called nscd. You should edit /etc/nscd.conf and make it say:

        enable-cache            hosts           no

Apparently nscd serializes DNS queries thus slowing everything down when an application (such as Squid) hits the resolver hard. You may notice something similar if you run a log processor executing many DNS resolver queries - the resolver starts to slow.. right.. down.. . . .

According to Andres Kroonmaa, users of Solaris starting from version 2.6 and up should NOT completely disable nscd daemon. nscd should be running and caching passwd and group files, although it is suggested to disable hosts caching as it may interfere with DNS lookups.

Several library calls rely on free FILE descriptors with FD < 256 being available. Systems running without nscd may fail on such calls if the first 256 file descriptors are all in use.

Since Solaris 2.6, Sun has changed the way some system calls work and uses the nscd daemon as their implementor. To communicate with nscd, Solaris uses undocumented door calls. Basically, nscd is used to reduce the memory usage of user-space system libraries that use the passwd and group files. Before 2.6, Solaris cached the full passwd file in library memory on first use, but as this was considered to use up too much RAM on large multiuser systems, Sun decided to move the implementation of these calls out of the libraries and into a single dedicated daemon.

DNS lookups and /etc/nsswitch.conf

by Jason Armistead.

The /etc/nsswitch.conf file determines the order of searches for lookups (amongst other things). You might only have it set up to allow NIS and HOSTS files to work. You definitely want the "hosts:" line to include the word dns, e.g.:

        hosts:      nis dns [NOTFOUND=return] files

DNS lookups and NIS

by Chris Tilbury.

Our site cache is running on a Solaris 2.6 machine. We use NIS to distribute authentication and local hosts information around, and, in common with our multiuser systems, we run a slave NIS server on it to speed up the response to NIS queries.

We were seeing very high name-ip lookup times (avg ~2sec) and ip->name lookup times (avg ~8 sec), although there didn't seem to be that much of a problem with response times for valid sites until the cache was being placed under high load. Then, performance went down the toilet.

After some time, and a bit of detective work, we found the problem. On Solaris 2.6, if you have a local NIS server running (ypserv) and you have NIS in your /etc/nsswitch.conf hosts entry, then check the flags it is being started with. The 2.6 ypstart script checks to see if there is a resolv.conf file present when it starts ypserv. If there is, then it starts it with the -d option.

This has the same effect as putting the YP_INTERDOMAIN key in the hosts table -- namely, that failed NIS host lookups are tried against the DNS by the NIS server.

This is a bad thing(tm)! If NIS itself tries to resolve names using the DNS, then the requests are serialised through the NIS server, creating a bottleneck (This is the same basic problem that is seen with nscd). Thus, one failing or slow lookup can, if you have NIS before DNS in the service switch file (which is the most common setup), hold up every other lookup taking place.

If you're running in this kind of setup, then you will want to make sure that

  1. ypserv doesn't start with the -d flag.
  2. you don't have the YP_INTERDOMAIN key in the hosts table (find the B=-b line in the yp Makefile and change it to B=)
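The Makefile edit in step 2 can be sketched with sed against a stand-in copy of the yp Makefile (a mock file; the real /var/yp/Makefile has much more around that line):

```shell
# Change B=-b to B= in a stand-in copy of the yp Makefile, as step 2
# above describes. /tmp/yp-Makefile is a mock, not the real file.
cat > /tmp/yp-Makefile <<'EOF'
B=-b
EOF

sed -i 's/^B=-b$/B=/' /tmp/yp-Makefile
cat /tmp/yp-Makefile
```

You would then rebuild the NIS hosts maps so the YP_INTERDOMAIN key is dropped.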

We changed these here, and saw our average lookup times drop by up to an order of magnitude (~150msec for name-ip queries and ~1.5sec for ip-name queries, the latter still so high, I suspect, because more of these fail and timeout since they are not made so often and the entries are frequently non-existent anyway).

Tuning

Solaris 2.x - tuning your TCP/IP stack and more by Jens-S. Vöckler

disk write error: (28) No space left on device

You might get this error even if your disk is not full, and is not out of inodes. Check your syslog logs (/var/adm/messages, normally) for messages like either of these:

        NOTICE: realloccg /proxy/cache: file system full
        NOTICE: alloc: /proxy/cache: file system full

In a nutshell, the UFS filesystem used by Solaris can't cope with the workload squid presents to it very well. The filesystem will end up becoming highly fragmented, until it reaches a point where there are insufficient free blocks left to create files with, and only fragments available. At this point, you'll get this error and squid will revise its idea of how much space is actually available to it. You can do a "fsck -n raw_device" (no need to unmount, this checks in read only mode) to look at the fragmentation level of the filesystem. It will probably be quite high (>15%).

Sun suggest two solutions to this problem. One costs money, the other is free but may result in a loss of performance (although Sun do claim it shouldn't, given the already highly random nature of squid disk access).

The first is to buy a copy of VxFS, the Veritas Filesystem. This is an extent-based filesystem and it's capable of having online defragmentation performed on mounted filesystems. This costs money, however (VxFS is not very cheap!)

The second is to change certain parameters of the UFS filesystem. Unmount your cache filesystems and use tunefs to change optimization to "space" and to reduce the "minfree" value to 3-5% (under Solaris 2.6 and higher, very large filesystems will almost certainly have a minfree of 2% already and you shouldn't increase this). You should be able to get fragmentation down to around 3% by doing this, with an accompanied increase in the amount of space available.

Thanks to Chris Tilbury.

Solaris X86 and IPFilter

by Jeff Madison

Important update regarding Squid running on Solaris x86. I have been working for several months to resolve what appeared to be a memory leak in squid when running on Solaris x86 regardless of the malloc that was used. I have made 2 discoveries that anyone running Squid on this platform may be interested in.

Number 1: There is not a memory leak in Squid, even though Top reports very little free memory after the system has been running for some time (how long varies with the load the system is under). True to the claims of the Sun engineer I spoke to, this statistic from Top is incorrect. The odd thing is that you do begin to see performance suffer substantially as time goes on, and the only way to correct the situation is to reboot the system. This leads me to discovery number 2.

Number 2: There is some type of resource problem, memory or other, with IPFilter on Solaris x86. I have not taken the time to investigate what the problem is because we are no longer using IPFilter. We have switched to an Alteon ACE 180 Gigabit switch, which will do the trans-proxy for you. After moving the trans-proxy redirection process out to the Alteon switch, Squid ran for 3 days straight under a huge load with no problem whatsoever. We currently have 2 boxes with 40 GB of cached objects on each box. This 40 GB was accumulated in the 3 days; from that you can see what type of load these boxes are under. Prior to this change we were never able to operate for more than 4 hours.

Because the problem appears to be with IPFilter, I would guess that you would only run into this issue if you are trying to run Squid as a transparent proxy using IPFilter, which makes sense. If anyone has information indicating my findings are incorrect, I am willing to investigate further.

Changing the directory lookup cache size

by Mike Batchelor

On Solaris, the kernel variable for the directory name lookup cache size is ncsize. In /etc/system, you might want to try

        set ncsize = 8192
or even higher. The kernel variable ufs_inode - which is the size of the inode cache itself - scales with ncsize in Solaris 2.5.1 and later. Previous versions of Solaris required both to be adjusted independently, but now, it is not recommended to adjust ufs_inode directly on 2.5.1 and later.

You can set ncsize quite high, but at some point - dependent on the application - a too-large ncsize will increase the latency of lookups.

Defaults are:

        Solaris 2.5.1 : (max_nprocs + 16 + maxusers) + 64
        Solaris 2.6/Solaris 7 : 4 * (max_nprocs + maxusers) + 320
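To make the defaults concrete, here is the arithmetic with assumed sample values (max_nprocs and maxusers differ per system; the numbers below are only illustrative):

```shell
# Evaluate the two ncsize default formulas with example kernel values.
max_nprocs=1000   # example value; query your own system for the real one
maxusers=64       # example value

ncsize_251=$(( (max_nprocs + 16 + maxusers) + 64 ))   # Solaris 2.5.1 formula
ncsize_26=$(( 4 * (max_nprocs + maxusers) + 320 ))    # Solaris 2.6/7 formula

echo "Solaris 2.5.1 default ncsize: $ncsize_251"   # 1144
echo "Solaris 2.6/7 default ncsize: $ncsize_26"    # 4576
```

With these sample values the suggested set ncsize = 8192 is roughly double the 2.6 default.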

The priority_paging algorithm

by Mike Batchelor

Another new tuneable (actually a toggle) in Solaris 2.5.1, 2.6 or Solaris 7 is the priority_paging algorithm. This is actually a complete rewrite of the virtual memory system on Solaris. It will page out application data last, and filesystem pages first, if you turn it on (set priority_paging = 1 in /etc/system). As you may know, the Solaris buffer cache grows to fill available pages, and under the old VM system, applications could get paged out to make way for the buffer cache, which can lead to swap thrashing and degraded application performance. The new priority_paging helps keep application and shared library pages in memory, preventing the buffer cache from paging them out, until memory gets REALLY short. Solaris 2.5.1 requires patch 103640-25 or higher and Solaris 2.6 requires 105181-10 or higher to get priority_paging. Solaris 7 needs no patch, but all versions have it turned off by default.
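The toggle described above is a one-line /etc/system entry; shown here as a fragment (apply it only on the Solaris releases and patch levels listed above):

```
* /etc/system fragment: enable the priority_paging algorithm
set priority_paging = 1
```

A reboot is required for /etc/system changes to take effect.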

14.2 FreeBSD

T/TCP bugs

We have found that FreeBSD-2.2.2-RELEASE has a number of bugs related to T/TCP. FreeBSD will try to use T/TCP if you have enabled the ``TCP Extensions.'' To disable T/TCP, use sysinstall to disable TCP Extensions, or edit /etc/rc.conf and set

        tcp_extensions="NO"             # Allow RFC1323 & RFC1544 extensions (or NO).
or add the following to your /etc/rc file:
        sysctl -w net.inet.tcp.rfc1644=0

mbuf size

We noticed a strange thing with some of Squid's interprocess communication. Often the data coming from the dnsserver process can NOT be read in a single read. With full debugging, it looks like this:

1998/04/02 15:18:48| comm_select: FD 46 ready for reading
1998/04/02 15:18:48| ipcache_dnsHandleRead: Result from DNS ID 2 (100 bytes)
1998/04/02 15:18:48| ipcache_dnsHandleRead: Incomplete reply
....other processing occurs...
1998/04/02 15:18:48| comm_select: FD 46 ready for reading
1998/04/02 15:18:48| ipcache_dnsHandleRead: Result from DNS ID 2 (9 bytes)
1998/04/02 15:18:48| ipcache_parsebuffer: parsing:
$name www.karup.com
$h_name www.karup.inter.net
$h_len 4
$ipcount 2
38.15.68.128
38.15.67.128
$ttl 2348
$end

Interestingly, most of the time only 100 bytes can be read on the first pass. When a second read() call is required, it adds extra latency to the overall request processing. On our cache running on Digital Unix, the average dnsserver response time is 0.01 seconds. On our FreeBSD cache, however, the average delay is 0.10 seconds.

Here is a simple patch to fix the bug:

===================================================================
RCS file: /home/ncvs/src/sys/kern/uipc_socket.c,v
retrieving revision 1.40
retrieving revision 1.41
diff -p -u -r1.40 -r1.41
--- src/sys/kern/uipc_socket.c  1998/05/15 20:11:30     1.40
+++ /home/ncvs/src/sys/kern/uipc_socket.c       1998/07/06 19:27:14     1.41
@@ -31,7 +31,7 @@
  * SUCH DAMAGE.
  *
  *     @(#)uipc_socket.c       8.3 (Berkeley) 4/15/94
- *     $Id: FAQ.sgml,v 1.106 2002/01/13 20:08:00 wessels Exp $
+ *     $Id: FAQ.sgml,v 1.106 2002/01/13 20:08:00 wessels Exp $
  */

 #include <sys/param.h>
@@ -491,6 +491,7 @@ restart:
                                mlen = MCLBYTES;
                                len = min(min(mlen, resid), space);
                        } else {
+                               atomic = 1;
 nopages:
                                len = min(min(mlen, resid), space);
                                /*

Another technique which may help, although it does not fix the bug, is to increase the kernel's mbuf size. The default is 128 bytes. The MSIZE parameter is defined in /usr/include/machine/param.h. To change it, we added the following line to our kernel configuration file:

        options         MSIZE="256"

Dealing with NIS

/var/yp/Makefile has the following section:

        # The following line encodes the YP_INTERDOMAIN key into the hosts.byname
        # and hosts.byaddr maps so that ypserv(8) will do DNS lookups to resolve
        # hosts not in the current domain. Commenting this line out will disable
        # the DNS lookups.
        B=-b
You can comment out the B=-b line, after which ypserv will not do DNS lookups.

FreeBSD 3.3: The lo0 (loop-back) device is not configured on startup

Squid requires the loopback interface to be configured and up. If it is not, you will get error messages of the commBind type.

From the FreeBSD 3.3 Errata Notes:

Fix: Assuming you experience this problem at all, edit /etc/rc.conf and search for where the network_interfaces variable is set. In its value, change the word auto to lo0, since the auto keyword doesn't bring the loop-back device up properly, for reasons yet to be adequately determined. Since your other interface(s) will already be set in the network_interfaces variable after initial installation, it should be safe to simply s/auto/lo0/ in rc.conf and reboot your system.
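The s/auto/lo0/ substitution can be sketched like so, against a stand-in rc.conf (the fxp0 interface name is just an example of an "other" interface already listed):

```shell
# Replace the word auto with lo0 in the network_interfaces line of a
# stand-in rc.conf, mirroring the errata's suggested s/auto/lo0/ edit.
cat > /tmp/rc.conf <<'EOF'
network_interfaces="auto fxp0"
EOF

sed -i 's/auto/lo0/' /tmp/rc.conf
cat /tmp/rc.conf
```

On a real system you would edit /etc/rc.conf itself and then reboot.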

Thanks to Robert Lister.

FreeBSD 3.x or newer: Speed up disk writes with Softupdates

by Andre Albsmeier

FreeBSD 3.x and newer support Softupdates. This is a mechanism to speed up disk writes, much as mounting ufs volumes in async mode does. However, Softupdates achieves performance similar to or better than async mode without losing safety in the case of a system crash. For more detailed information, as well as the terms of use, see /sys/contrib/softupdates/README and /sys/ufs/ffs/README.softupdate.

To build a system that supports softupdates, you have to rebuild the kernel with options SOFTUPDATES set (see LINT for a commented example). After rebooting with the new kernel, you can enable softupdates on a per-filesystem basis with the command:

        $ tunefs -n enable /mountpoint
The filesystem MUST NOT be mounted at this time. After that, softupdates is enabled and the filesystem can be mounted normally. To verify that the softupdates code is running, simply issue a mount command; its output will look something like:
        $ mount
        /dev/da2a on /usr/local/squid/cache (ufs, local, noatime, soft-updates, writes: sync 70 async 225)

Internal DNS problems when running inside a jail environment

Some users report problems with Squid running inside a jail environment. Squid's log messages look like this:

2001/10/12 02:08:49| comm_udp_sendto: FD 4, 192.168.1.3, port 53: (22) Invalid argument
2001/10/12 02:08:49| idnsSendQuery: FD 4: sendto: (22) Invalid argument

You can eliminate this problem by setting the address of the jail's network interface in the 'udp_outgoing_addr' configuration option in squid.conf.
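For example, if the jail's interface address were 192.168.1.3 (the address appearing in the log lines above; substitute your own), the squid.conf entry would be a fragment like:

```
# squid.conf fragment: bind outgoing UDP (DNS) traffic to the jail's address
udp_outgoing_addr 192.168.1.3
```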

14.3 OSF1/3.2

If you compile both libgnumalloc.a and Squid with cc, the mstats() function returns bogus values. However, if you compile libgnumalloc.a with gcc and Squid with cc, the values are correct.

14.4 BSD/OS

gcc/yacc

Some people report difficulties compiling Squid on BSD/OS.

process priority

I've noticed that my Squid process seems to stick at a nice value of four, and clicks back to that even after I renice it to a higher priority. However, looking through the Squid source, I can't find any instance of a setpriority() call, or anything else that would seem to indicate Squid's adjusting its own priority.

by Bill Bogstad

BSD Unices have traditionally auto-niced non-root processes to 4 after they have used a lot (4 minutes???) of CPU time. My guess is that it is BSD/OS, not Squid, that is doing this. I don't know offhand if there is a way to disable this on BSD/OS.

by Arjan de Vet

You can get around this by starting Squid with a nice level of -4 (or another negative value).

by Bert Driehuis

The autonice behavior is a leftover from the history of BSD as a university OS. It penalises CPU bound jobs by nicing them after using 600 CPU seconds. Adding

        sysctl -w kern.autonicetime=0
to /etc/rc.local will disable the behavior systemwide.

14.5 Linux

Cannot bind socket FD 5 to 127.0.0.1:0: (49) Can't assign requested address

Try a different version of Linux. We have received many reports of this ``bug'' from people running Linux 2.0.30. The bind(2) system call should NEVER give this error when binding to port 0.

FATAL: Don't run Squid as root, set 'cache_effective_user'!

Some users have reported that setting cache_effective_user to nobody under Linux does not work. However, it appears that using any cache_effective_user other than nobody will succeed. One solution is to create a user account for Squid and set cache_effective_user to that. Alternately you can change the UID for the nobody account from 65535 to 65534.

Another problem is that RedHat 5.0 Linux seems to have a broken setresuid() function. There are two ways to fix this. Before running configure:

        % setenv ac_cv_func_setresuid no
        % ./configure ...
        % make clean
        % make install
Or after running configure, manually edit include/autoconf.h and change the HAVE_SETRESUID line to:
        #define HAVE_SETRESUID 0

Also, some users report this error is due to a NIS configuration problem. By adding compat to the passwd and group lines of /etc/nsswitch.conf, the problem goes away. (Ambrose Li)

Russ Mellon notes that these problems with cache_effective_user are fixed in version 2.2.x of the Linux kernel.

Large ACL lists make Squid slow

The regular expression library which comes with Linux is known to be very slow. Some people report it entirely fails to work after long periods of time.

To fix, use the GNUregex library included with the Squid source code. With Squid-2, use the --enable-gnuregex configure option.

gethostbyname() leaks memory in RedHat 6.0 with glibc 2.1.1.

by Radu Greab

The gethostbyname() function leaks memory in RedHat 6.0 with glibc 2.1.1. The quick fix is to delete nisplus service from hosts entry in /etc/nsswitch.conf. In my tests dnsserver memory use remained stable after I made the above change.

See RedHat bug id 3919.

assertion failed: StatHist.c:91: `statHistBin(H, max) == H->capacity - 1' on Alpha system.

by Jamie Raymond

Some early versions of Linux have a kernel bug that causes this. All that is needed is a recent kernel that doesn't have the mentioned bug.

tools.c:605: storage size of `rl' isn't known

This is a bug with some versions of glibc. The glibc headers incorrectly depended on the contents of some kernel headers. Everything broke down when the kernel folks rearranged a bit in the kernel-specific header files.

We think this glibc bug is present in versions 2.1.1 (or 2.1.0) and earlier. There are two solutions:

  1. Make sure /usr/include/linux and /usr/include/asm are from the kernel version glibc is built/configured for, not any other kernel version. Only the compilation of loadable kernel modules outside of the kernel sources depends on having the current versions of these, and for such builds -I/usr/src/linux/include (or wherever the new kernel headers are located) can be used to resolve the matter.
  2. Upgrade glibc to 2.1.2 or later. This is always a good idea anyway, provided a prebuilt upgrade package exists for the Linux distribution used. Note: Do not attempt to manually build and install glibc from source unless you know exactly what you are doing, as this can easily render the system unusable.

Can't connect to some sites through Squid

When using Squid, some sites may give errors such as ``(111) Connection refused'' or ``(110) Connection timed out'' although these sites work fine without going through Squid.

Some versions of Linux implement Explicit Congestion Notification (ECN), and this can cause some TCP connections to fail when contacting sites with broken firewalls or broken TCP/IP implementations.

To work around such broken sites you can disable ECN with the following command:

echo 0 >/proc/sys/net/ipv4/tcp_ecn
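The echo above does not survive a reboot. On distributions that read /etc/sysctl.conf at boot (an assumption about your setup), the equivalent persistent setting is a one-line fragment:

```
# /etc/sysctl.conf fragment: disable ECN at boot
net.ipv4.tcp_ecn = 0
```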

Found this on the FreeBSD mailing list:

From: Robert Watson

As Bill Fumerola has indicated, and I thought I'd follow up with a bit more detail, the behavior you're seeing is the result of a bug in the FreeBSD IPFW code. FreeBSD did a direct comparison of the TCP header flag field with an internal field in the IPFW rule description structure. Unfortunately, at some point, someone decided to overload the IPFW rule description structure field to add a flag representing "ESTABLISHED". They used a flag value that was previously unused by the TCP protocol (which doesn't make it safer, just less noticeable). Later, when that flag was allocated for ECN (Explicit Congestion Notification) in TCP, and Linux began using ECN by default, the packets began to match ESTABLISHED rules regardless of the other TCP header flags. This bug was corrected on the RELENG_4 branch, and a security advisory for the bug was released. This was, needless to say, a pretty serious bug, and a good example of why you should be very careful to compare only the bits you really mean to, and should separate packet state from protocol state in management structures, as well as make use of extensive testing to make sure rules actually have the effect you describe.

See also the thread on the NANOG mailing list, RFC 3168 ``The Addition of Explicit Congestion Notification (ECN) to IP'' (Proposed Standard), or Sally Floyd's page on ECN and problems related to it.

14.6 HP-UX

StatHist.c:74: failed assertion `statHistBin(H, min) == 0'

This was a very mysterious and unexplainable bug with GCC on HP-UX. Certain functions, when specified as static, would cause math bugs. The compiler also failed to handle implied int-double conversions properly. These bugs should all be handled correctly in Squid version 2.2.

14.7 IRIX

dnsserver always returns 255.255.255.255

This is a problem with GCC (2.8.1 and earlier) on Irix 6 which causes inet_ntoa() to always return the string 255.255.255.255 for _ANY_ address. If this happens to you, compile Squid with the native C compiler instead of GCC.

14.8 SCO-UNIX

by F.J. Bosscha

To make Squid run comfortably on SCO Unix, you need to do the following:

Increase the NOFILES parameter and the NUMSP parameter, then recompile Squid, as I did. Even though Squid reported in cache.log that it had 3000 file descriptors available, messages about running out of file descriptors kept appearing. After I increased the NUMSP value, the problems disappeared.

The only thing left is the number of TCP connections the system can handle. The default is 256, but I increased it because of the number of clients we have.

