Saturday, December 29, 2012
English humour on Slashdot
Maybe I'm too prejudiced, but it seems to me that commenters on foreign boards delight me with witty posts more often than the Russian anonymous crowd.
Here is a good example of the kind of pun I like.
...
> Commenting to remove crap moderation! Pfff....Slashdot, why cant I change my mind!
>> Because it is a well known fact that forces recrut on /. to pilot drone. You can't change your mind after firing at something. Hence, it is part of the training.
>>> I thought it was because we Slashdotters are known to never make mistaks.
HaHa! That was nice!
Labels:
joke
Sunday, December 23, 2012
Why Linux is better than Windows: whoami
With this post I'm starting a new series on my blog. My intention is to point out some non-obvious similarities and differences between these two operating systems, from a Linux user's perspective, of course.
Recently I've noticed that, starting with Windows 7, Windows got the 'whoami' command, which it took from UNIX/Linux after about 20 or 30 years.
Not to mention the 'Run as' option in Windows XP, which was obviously modeled on the UNIX/Linux sudo example, at least a decade or two later, I think.
Labels:
copycat,
Why Linux is ahead of Windows
Thursday, December 13, 2012
Zabbix: Timer process too busy (high CPU load)
The timer process recalculates the following trigger functions every 30 seconds:
nodata(), date(), dayofmonth(), dayofweek(), time(), now()
If you use Zabbix internal checks to monitor self-load and notice that the timer process is 100% busy, it's good to know how many triggers it processes.
pg cli # SELECT function, count(*) FROM functions GROUP BY function;
 function  | count
-----------+--------
 count     |   8621
 prev      |    936
 max       |    560
 abschange |      1
 iregexp   |     97
 str       |  17961
 change    | 121942
 last      | 159969
 diff      |     33
 nodata    |  36800
 avg       |  17937
 min       |   2273
(12 rows)

pg cli # SELECT function, parameter, count(*) FROM functions where function IN ('nodata', 'date', 'dayofmonth', 'dayofweek', 'time', 'now') GROUP BY function, parameter;
 function | parameter | count
----------+-----------+-------
 nodata   | 1200      |    32
 nodata   | 300       | 27457
 nodata   | 600       |   836
 nodata   | 7200      |   132
 nodata   | 60        |  8247
 nodata   | 3600      |    96
(6 rows)
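As a side note, the timer process busyness itself can be graphed with a Zabbix internal item; in Zabbix 2.x the key should look roughly like this (verify against the documentation for your version):

zabbix[process,timer,avg,busy]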
Labels:
zabbix
Saturday, November 24, 2012
Gimp 2.8.2 for Ubuntu 10.04
These are standalone builds (the 64-bit one targets the nocona processor).
hxxp://xvi.academ.org:5081/gimp-2.8.2-64bit.tar.bz2
hxxp://xvi.academ.org:5081/gimp-2.8.2-32bit.tar.bz2
Download and unpack it to /opt/gimp-2.8, then use the following bash script to launch:
#!/bin/bash
export PATH=/opt/gimp-2.8/bin:$PATH;
export PKG_CONFIG_PATH=/opt/gimp-2.8/lib/pkgconfig;
export LD_LIBRARY_PATH=/opt/gimp-2.8/lib;
/opt/gimp-2.8/bin/gimp-2.8;

Copy the desktop entry below to /usr/share/applications/gimp-2.8.desktop:
[Desktop Entry]
Version=1.0
Type=Application
Name=Gimp 2.8.2
GenericName=Image Editor
Comment=Create images and edit photographs
Exec=/opt/gimp-2.8/bin/gimp-2.8 %U
TryExec=/opt/gimp-2.8/bin/gimp-2.8
Icon=gimp
Terminal=false
Categories=Graphics;2DGraphics;RasterGraphics;GTK;
X-GNOME-Bugzilla-Bugzilla=GNOME
X-GNOME-Bugzilla-Product=GIMP
X-GNOME-Bugzilla-Component=General
X-GNOME-Bugzilla-Version=2.8.2
X-GNOME-Bugzilla-OtherBinaries=gimp-2.8
StartupNotify=true
MimeType=application/postscript;application/pdf;image/bmp;image/g3fax;image/gif;image/x-fits;image/pcx;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-psd;image/x-sgi;image/x-tga;image/x-xbitmap;image/x-xwindowdump;image/x-xcf;image/x-compressed-xcf;image/x-gimp-gbr;image/x-gimp-pat;image/x-gimp-gih;image/tiff;image/jpeg;image/x-psp;image/png;image/x-icon;image/x-xpixmap;application/pdf;image/x-wmf;image/x-xcursor;
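For completeness, a minimal sketch of the install steps described above (the archive layout and file names are assumptions, adjust as needed):

# unpack the downloaded tarball into /opt/gimp-2.8
sudo mkdir -p /opt/gimp-2.8
sudo tar -xjf gimp-2.8.2-64bit.tar.bz2 -C /opt/gimp-2.8
# save the launcher script above, e.g. as /opt/gimp-2.8/gimp.sh, and make it executable
sudo chmod +x /opt/gimp-2.8/gimp.sh
# install the desktop entry
sudo cp gimp-2.8.desktop /usr/share/applications/gimp-2.8.desktop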
Labels:
ubuntu
Saturday, November 10, 2012
Using arping in Zabbix
This is a quick recipe for making Zabbix 2.x use arping instead of the simple ICMP check done with the fping utility. The following method is more a proof of concept than a complete solution; consider modifying it for your needs. Instead of messing with Zabbix internals, I chose to substitute the fping utility (in zabbix_server.conf):
FpingLocation=/usr/sbin/fping.sh
The script parses the input it gets from Zabbix. Checks for some of the arguments are hardcoded (-C3), so if you change the parameters of a simple check in the Zabbix web interface, it may fail.
#!/bin/bash
# /usr/sbin/fping.sh
#
export PATH=/root/bin:/sbin:/usr/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/default/bin/:/usr/local/bin/:/usr/local/sbin/:

while IFS=$'\n' read -r LINE || [[ -n "$LINE" ]]; do
    IP="$LINE";
    if [[ "$1" == "-q" && "$2" == "-C3" ]]; then
        DEV=`ip route get $IP | cut -d' ' -f3|tr -d '\n'`
        # local
        if [[ $DEV == "local0" ]]; then
            ARPING=`/sbin/arping -I local0 -c 3 $IP 2>&1|grep Unicast`;
            if [[ $ARPING == "" ]]; then
                echo "$IP : - - -";
            else
                echo "$IP : 1.10 1.10 1.10";
            fi
        else
            # not local
            /usr/sbin/fping -q -C3 $IP 2>&1;
        fi
    else
        # something else
        /usr/sbin/fping $1 $2 $3 $4 $IP 2>&1;
    fi
done
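To sanity-check the wrapper outside of Zabbix, you can feed it an address on stdin the way the server would; the -q -C3 pair is the hardcoded case handled by the script (the address below is just an example):

echo "192.168.0.1" | /usr/sbin/fping.sh -q -C3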
Labels:
zabbix
Sunday, October 28, 2012
Novosibirsk Oblast will become Russia's pilot region for testing computer-based delivery of the Unified State Exam (EGE) in informatics.
"Novosibirsk Oblast will become Russia's pilot region for testing computer-based delivery of the EGE in informatics"; that is the story that appeared in the local news. Indeed, the school I help with Linux administration is preparing to host such an event, and I want to write a little about the software that was sent from the "center".
The software works according to the following scheme. There is a server part that handles all stages of the exam: printing exam forms, registering and authorizing students by individual 12-digit numeric codes, handing out assignments, collecting results and so on. The client part is a web browser.
First of all, the software in question is required to work in schools regardless of whether Linux has been deployed there. Fortunately, the authors of KEGE ("computer-based EGE") did not take the worst path: no *.exe files wrapped in Wine. It is a Java application bundled with Tomcat, so the whole exam installer weighs about 150 MB, and there are builds for Windows, x86 Linux and x86_64 Linux. In short, quite civilized. It ships with voluminous documentation which, although full of those awful Russian abbreviations like ARM ("automated workstation"), is otherwise written adequately. Examples:
Google Chrome - a browser developed by Google on the basis of the free Chromium browser.
Linux - a common name for Unix-like operating systems based on the Linux kernel and the libraries and system programs developed within the GNU project.
Microsoft Internet Explorer - a series of browsers developed by Microsoft Corporation. Included with the Windows family of operating systems.
An example from the list of required software:
FreePascal (at least 2.6.0) (the server is slow, the connection does not succeed on the first try)
Such touching care for the teachers. :) The links to the Linux versions point exclusively to tar archives.
See? It is not as bad as one might have expected. Keep in mind that a school informatics teacher may easily try to connect a digital projector to a laptop through the infrared port; the KEGE installer is written exactly for that level of computer literacy. You run either start.bat or start.sh and that's it: the web server is ready. It works with three access levels, and the administrative interface is reachable only from localhost: http://localhost:8888/admin (hello, localhost admins!).
Whether the ACL can be reconfigured I did not figure out right away. We went in from localhost and were immediately stopped: the browser must be either > Firefox 13, or > Chrome 19, or > IE something-or-other. RHEL clones ship Firefox 10 by default as the long-term-support version. Fine, we know how to step around this rake: go to about:config, create general.useragent.override and so on. First level passed, moving on.
When you open /admin, a hidden iframe is loaded that checks connectivity to the world wide web, namely to http://vk.com/images/hat.gif, and on success it throws up a big red window: "Tsk-tsk! You cannot hold the exam on such an insecure system!" Never mind that it is a RHEL 6 [clone]; what if something leaks? Fine, we already have a local proxy that does not let anyone through to VK, so we point the browser at it. Second level passed.
All of this software periodically demands various activation keys and confirmations of the exam participants' identities, which is commendable in itself. The passwords and the keys for decrypting the assignments are sent from the center. There is also an amusing term, "seating plan". A binary (encrypted?) file with student data, access passwords and so on is supplied. The remaining levels of this entertaining game were played by someone other than me, but as I understood from the person at the "center", we got further than everyone else, since some sites crashed right at starting the server part. :) In the end, after a complete setup, the exam software knows everyone by name, draws pictures of the students' activated computers, and in general everything is very grown-up. The design is average and austere.
https://docs.google.com/open?id=0B1Mh3B-dAA04eUdONDNWVmQ2bUk
Thursday, October 18, 2012
iostat -x
This is a repost of an article from http://dom.as/2009/03/11/iostat/ that I liked very much.
My favorite Linux tool in DB work is ‘iostat -x’ (and I really really want to see whenever I’m doing any kind of performance analysis), yet I had to learn its limitations and properties. For example, I took 1s snapshot from a slightly overloaded 16-disk database box:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8.12    0.00    2.57   21.65    0.00   67.66

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s   wsec/s \
sda     7684.00   19.00 2420.00 498.00  81848.00  5287.00 \
        avgrq-sz avgqu-sz  await  svctm  %util
           29.86    32.99  11.17   0.34 100.00

I pasted this somewhere on IRC, and got "doesn't look too healthy" and that it is disk-bound. Now, to understand if it really is, one has to understand what iostat tells here.
First line of numbers shows that we’ve got plenty of CPU resources (thats because nowadays it is quite difficult to get a box with not enough CPU power, and I/O still seems to be bottleneck) – and we have more threads waiting for I/O than we have CPU execution (that sounds normal).
Now the actual per-disk statistics are where one should look. I used to prefer %util over general %iowait (I couldn't really explain what %iowait is, but I can say what %util is). I don't know why, but iostat has most interesting bits at the end, and not so interesting at the start:
- %util: how much time did the storage device have outstanding work (was busy). In proper RAID environments it is more like “how much time did at least one disk in RAID array have something to do”. I’m deliberately excluding any kind of cache here – if request can be served from cache, the chance is quite negligible it will show up in %util, unlike in other values. What this also means – the RAID subsystem can be loaded from 6.25% (one disk doing the work) to 100% (all of them busy). Thats quite a lot of insight in single value of ’100%’, isn’t it?
- svctm: Though manual says “The average service time (in milliseconds) for I/O requests that were issued to the device.”, it isn’t exactly that when you look at multiple-disk systems. What it says is, “when your I/O subsystem is busy, how fast does it respond requests overall”. Actually, less you load your system, higher svctm is (as there’re less outstanding requests, and average time to serve them goes up). Of course, at some certain moment, when I/O becomes really overloaded, you can see svctm going up. One can tweak /sys/block/sda/queue/nr_requests based on this – to avoid overloading I/O controller, though that is really rarely needed.
- await. One of my favorites – how fast do requests go through. It is just an average, how long it takes to serve a request for a device, once it gets into device queue, to final “OK”. Low = good, high = bad. There’re few gotchas here – even though different reads can have different performance properties (middle of disk, outer areas of disk, etc), the biggest difference is between reads and writes. Reads take time, writes can be instant (write caching at underlying layers..). As 80% of requests were reads, we can try to account for that by doing 11.17/0.8 math, to get 14ms figure. Thats quite high – systems that aren’t loaded can show ~5ms times (which isn’t that far away from 4ms rotation time of 15krpm disk).
- avgqu-sz: Very very very important value – how many requests are there in a request queue. Low = either your system is not loaded, or has serialized I/O and cannot utilize underlying storage properly. High = your software stack is scalable enough to load properly underlying I/O. Queue size equal to amount of disks means (in best case of request distribution) that all your disks are busy. Queue size higher than amount of disks means that you are already trading I/O response time for better throughput (disks can optimize order of operations if they know them beforehand, that's what NCQ – Native Command Queueing does). If one complains about I/O performance issues when avgqu-sz is lower, then it is application specific stuff, that can be resolved with more aggressive read-ahead, less fsyncs, etc. One interesting part – avgqu-sz, await, svctm and %util are interdependent ( await = avgqu-sz * svctm / (%util/100) ); see the quick numeric check at the end of this post.
- avgrq-sz: Just an average request size. Quite often will look like a block size of some kind – can indicate what kind of workload happens. This is already post-merging, so lots of adjacent block operations will bump this up. Also, if database page is 16k, though filesystem or volume manager block is 32k, this will be seen in avgrq-sz. Large requests indicate there’s some big batch/stream task going on.
- wsec/s & rsec/s: Sectors read and written per second. Divide by 2048, and you’ll get megabytes per second. I wanted to write this isn’t important, but remembered all the non-database people who store videos on filesystems :) So, if megabytes per second matter, these values are important (and can be seen in ‘vmstat’ output too). If not, for various database people there are other ones:
- r/s & w/s: Read and write requests per second. This is already post-merging, and in proper I/O setups reads will mean blocking random read (serial reads are quite often merged), and writes will mean non-blocking random write (as underlying cache can allow to serve the OS instantly). These numbers are the ones that are the I/O capacity figures, though of course, depending on how much pressure underlying I/O subsystem gets (queue size!), they can vary. And as mentioned above, on rotational media it is possible to trade response time (which is not that important in parallel workloads) for better throughput.
- rrqm/s & wrqm/s: How many requests were merged by block layer. In ideal world, there should be no merges at I/O level, because applications would have done it ages ago. Ideals differ though, for others it is good to have kernel doing this job, so they don’t have to do it inside application. Quite often there will be way less merges, because applications which tend to write adjacent blocks, also tend to wait after every write (see my rant on I/O schedulers). Reads however can be merged way easier – especially if application does “read ahead” block by block. Another reason for merges is simple block size mismatch – 16k database pages on top of 8k database pages will cause adjacent block reads, which would be merged by block layer. On some systems read of two adjacent pages would result in 1MB reads, but thats another rant :)
- Device: – just to make sure, that you’re looking at the right device. :-)
- System has healthy high load (the request queue has two requests per disk)
- Average request time is double the value one would expect from an idle system; it isn't too harmful, but one can do better
- It is reading ~40MB/s from disks, at 2420 requests/s. That's quite high performance from an inexpensive 2u database server (shameless plug: X4240 :)
- High amount of merges comes from LVM snapshots, can be ignored
- System is alive, healthy and kicking, no matter what anyone says :)
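As a quick sanity check of the interdependence formula and the sectors-to-megabytes rule quoted above (my own arithmetic, not part of the original article), plugging in the numbers from the sample output:

# await ~ avgqu-sz * svctm / (%util/100) = 32.99 * 0.34 / 1.00 ~ 11.2 ms (iostat reported 11.17)
# read throughput ~ rsec/s / 2048 = 81848 / 2048 ~ 40 MB/s
awk 'BEGIN { printf "await ~ %.2f ms, reads ~ %.1f MB/s\n", 32.99 * 0.34 / (100/100), 81848 / 2048 }'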
Tuesday, October 2, 2012
Why Linux is ahead of Windows
It is known that Windows 8 will get support for a new security feature called Intel SMEP. I'll omit the description of what it is for, because what matters more is that Linux has had this feature since the spring of last year! And that is not all. Today one more technology, due to debut in next year's Intel Haswell line of processors, got its way into the Linux kernel: Supervisor Mode Access Prevention.
http://www.phoronix.com/scan.php?page=news_item&px=MTE5NzI
http://forums.grsecurity.net/viewtopic.php?f=7&t=3046
Man, don't ask me when Windows is going to get that!
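If you are curious whether your own CPU advertises these features, the flags eventually show up in /proc/cpuinfo once both the hardware and the kernel support them (a quick, informal check):

# prints 'smep' and/or 'smap' if the CPU exposes the features
grep -o -w -E 'smep|smap' /proc/cpuinfo | sort -u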
Labels:
Why Linux is ahead of Windows
Sunday, July 15, 2012
The major failure of Ubuntu as an enterprise desktop
Recently I got into a situation where I had to run Ubuntu 10.04 LTS on a new Sandy Bridge platform. What was my surprise when I realised that Canonical's "Enterprise" desktop distro has no support for the new Intel graphics core. Only community repos without any guarantee are available. In contrast, Red Hat Enterprise Linux got a graphics driver update and kernel backports a year ago. From my point of view this is the major failure of Ubuntu as an enterprise desktop. I'm very disappointed with this, because a lot of configuration was done around Ubuntu's so-called Long Term Support.
upd: Aug 2013
With the update release of the next Ubuntu LTS, 12.04.3, drastic changes were introduced. The kernel version was updated from 3.5 up to 3.8. That's not good at all. Third-party vendors usually build drivers for a specific kernel; consequently there is no compatibility at all for mission-critical enterprise solutions. What is the point of LTS if it behaves like a new release?
Wednesday, June 27, 2012
snmptrap
http://tiebing.blogspot.com/2007/05/snmptrap-command.html
snmptrap -v 1 -c public 192.168.100.40 1.2.3.4 192.168.254.60 3 0 ''
syntax:
-v 1: version 1
-c public: use community string "public"
192.168.100.40: trap manager's IP address (trap destination)
1.2.3.4: enterprise (i.e. type of device/object generating trap, default to system ID in mib II, can be empty)
192.168.254.50: trap source IP address (device's IP address)
3: generic trap ID, they are:
- coldStart(0),
- warmStart(1),
- linkDown(2),
- linkUp(3),
- authenticationFailure(4),
- egpNeighborLoss(5),
- enterpriseSpecific(6)
0: specific trap ID, 0 when generic trap ID is not 6
'': system up time (timestamp)
You can also add "mib type value" to the end of the command to send them along with the trap
v2 trap:
snmptrap -v 2c -c public 192.168.100.40 "" 1.2.3.4.0
"": system Uptime (when given as empty "", the system finds itself)
1.2.3.4.0: trap OID
again, add "mib type value" to the end if you want.
v3 trap:
snmptrap -v 3 -a SHA -A 1234567890 -x DES -X 1234567890 -l authPriv -u myuser -e "123abc" 192.168.100.1 "" linkUp.0
Labels:
memo,
monitoring,
zabbix
Saturday, May 12, 2012
Partitioning of Zabbix Database (PostgreSQL)
We are running a heavily loaded Zabbix database (currently PostgreSQL 9.1). The database contains tables with approximately 1700 million (almost 2 billion) rows. The common technique to optimize bulk INSERTs and DELETEs on tables like this is Table Partitioning. I managed to do this following these guidelines (in Russian): http://www.zabbix.com/wiki/non-english/ru/partitioning_in_postgresql.
Although the manual is extensive, some problems arose during production usage.
First, there was a problem with SELECT queries to partitioned tables; it was fixed by a separate patch before Zabbix v1.8.13, and as of that version the patch is no longer required. It brings a significant performance gain.
Secondly, the procedures in the manual above have an unpleasant side effect: the data of the first INSERT query that triggers creation of a partition are always lost. I had no desire to rewrite the proposed SQL, so I implemented the obvious fix instead: make INSERTs into the history* tables beforehand. This also helps with table locking; the partitioning procedures take a lock on the tables, which can easily result in Zabbix malfunctioning. Planning ahead will save you from being woken up at 0:00, when partitioning usually starts.
Example:
INSERT INTO history_uint (itemid, clock, value) VALUES ('1', extract(epoch FROM date_trunc('hour', now() + interval '12 hour')), '1');
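One way to automate this is a cron job that fires such an INSERT well before midnight, so the next partition already exists when the real data arrive. A hypothetical crontab entry (user, database and itemid are placeholders):

# pre-create tomorrow's history_uint partition every day at noon
0 12 * * * psql -U zabbix -d zabbix -c "INSERT INTO history_uint (itemid, clock, value) VALUES ('1', extract(epoch FROM date_trunc('hour', now() + interval '12 hour')), '1');"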
The third fix is related to the conditions that are used incorrectly in the recipe above. I just put the corrected versions here; the relevant lines are the ones that build check_condition (they were highlighted in bold in the original post).
Hope these notes will help someone.
CREATE OR REPLACE FUNCTION "public"."partition_every_day" (in parentoid oid, in scheme varchar, in clock int4) RETURNS text AS $BODY$ declare parent text := parentoid::regclass; suffix text := to_char (to_timestamp(clock), '_YYYY_MM_DD'); child text := scheme || (select relname from pg_class where oid = parentoid) || suffix; check_beg varchar; check_end varchar; check_condition varchar; check_field varchar := null; tmp record; script text := ''; i int := 0; j int := 0; begin perform child::regclass; return child; exception when undefined_table then check_beg = extract(epoch FROM date_trunc('day', to_timestamp(clock))); check_end = extract(epoch FROM date_trunc('day', to_timestamp(clock) + interval '1 day')); j = (select count(*) from pg_attribute where attrelid = parentoid and attnum >0); for tmp in select attname from pg_attribute where attrelid = parentoid and attnum >0 order by attnum loop i = i + 1; script = script || 'NEW.' || tmp.attname || case i when j then '' else ',' end; if (col_description (parentoid, i) ~* 'partition') and (check_field is null) then check_field = tmp.attname; end if; end loop; script = script || ')'; check_condition = '( ' || check_field || ' >= ' || quote_literal (check_beg) || ' and ' || check_field || ' < ' || quote_literal (check_end) || ' )'; execute 'create table ' || child || ' ( constraint partition' || suffix || ' check ' || check_condition || ' ) inherits (' || parent || ')'; execute 'create rule route' || suffix || ' as ' || ' on insert to ' || parent || ' where ' || check_condition || ' do instead insert into ' || child || ' values (' || script; perform copy_constraints(parent, child); perform copy_indexes(parent, child); execute 'GRANT SELECT ON ' || child || ' TO some_other_user'; execute 'GRANT ALL ON ' || child || ' TO zabbix'; return child; end; $BODY$ LANGUAGE 'plpgsql'-----
CREATE OR REPLACE FUNCTION "public"."partition_every_month" (in parentoid oid, in scheme varchar, in clock int4) RETURNS text AS $BODY$ declare parent text := parentoid::regclass; suffix text := to_char (to_timestamp(clock), '_YYYY_MM'); child text := scheme || (select relname from pg_class where oid = parentoid) || suffix; check_beg varchar; check_end varchar; check_condition varchar; check_field varchar := null; tmp record; script text := ''; i int := 0; j int := 0; begin perform child::regclass; return child; exception when undefined_table then check_beg = extract(epoch FROM date_trunc('month', to_timestamp(clock))); check_end = extract(epoch FROM date_trunc('month', to_timestamp(clock) + interval '1 month')); j = (select count(*) from pg_attribute where attrelid = parentoid and attnum >0); for tmp in select attname from pg_attribute where attrelid = parentoid and attnum >0 order by attnum loop i = i + 1; script = script || 'NEW.' || tmp.attname || case i when j then '' else ',' end; if (col_description (parentoid, i) ~* 'partition') and (check_field is null) then check_field = tmp.attname; end if; end loop; script = script || ')'; check_condition = '( ' || check_field || ' >= ' || quote_literal (check_beg) || ' and ' || check_field || ' < ' || quote_literal (check_end) || ' )'; execute 'create table ' || child || ' ( constraint partition' || suffix || ' check ' || check_condition || ' ) inherits (' || parent || ')'; execute 'create rule route' || suffix || ' as ' || ' on insert to ' || parent || ' where ' || check_condition || ' do instead insert into ' || child || ' values (' || script; perform copy_constraints(parent, child); perform copy_indexes(parent, child); execute 'GRANT SELECT ON ' || child || ' TO xxx'; execute 'GRANT ALL ON ' || child || ' TO zabbix'; return child; end; $BODY$ LANGUAGE 'plpgsql'
Labels:
monitoring,
postgres,
zabbix
Friday, May 11, 2012
Linux ACLs vs Solaris ACLs
Today I studied the difference between the Access Control List implementations in Linux and Solaris. Generally, the latest specification of ACLs is contained in the long-suffering POSIX 1003.1e draft. The man page for acl is dated 2002. Not so fresh indeed! To preserve compatibility with the traditional rwx approach, the IEEE crowd jumped over its head and invented something unusable. The Linux community implemented it in an even more unusable manner.
Let's look at this crap. Here is a citation from the [Linux] manual:
An ACL entry contains an entry tag type, an optional entry tag qualifier, and a set of permissions. ... The qualifier denotes the identifier of a user or a group, for entries with tag types of ACL_USER or ACL_GROUP, respectively. Entries with tag types other than ACL_USER or ACL_GROUP have no defined qualifiers.
This qualifier-with-no-identifier thing means that you write u:jack or g:staff to denote user jack and group staff. It's intuitive.
Then we have obscure permission-inheritance behavior. Look at the "OBJECT CREATION AND DEFAULT ACLs" section of the acl manual:
1. The new object inherits the default ACL of the containing directory as its access ACL.
Ok, that is fine.
2. The access ACL entries corresponding to the file permission bits are modified so that they contain no permissions that are not contained in the permissions specified by the mode parameter.
Huh?! So only the specified permissions remain, or what?
Other parts of the Linux ACL manual made me crazy too. The correspondence between the traditional model and the new one is non-linear and depends on conditions and behavior modifiers: the effective user ID, the special ACL mask, or the presence of other bits. On top of that, these permissions are managed with the dedicated utilities setfacl and getfacl; standard chmod, ls and other tools are not aware of ACLs!
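For comparison, a minimal Linux ACL session with those dedicated tools looks roughly like this (the path and user name are made up):

# grant user jack rwx on a directory and inspect the result
setfacl -m u:jack:rwx /srv/shared
getfacl /srv/shared
# set a default (inherited) ACL for objects created inside the directory
setfacl -d -m u:jack:rwx /srv/shared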
Now let's look at the Solaris variant; refer to the "Solaris ZFS Administration Guide". They gave up on the 10-year-old standard and armed Solaris with the NFSv4 ACL model, which is closer to NT-style ACLs. This eventually makes me think that Microsoft engineers invented something that is still lacking in UNIX.
# chmod A+user:gozer:read_data/execute:allow test.dir
# ls -dv test.dir
drwxr-xr-x+ 2 root root 2 Aug 31 12:02 test.dir
0:user:gozer:list_directory/read_data/execute:allow
1:owner@::deny
2:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/execute/write_attributes/write_acl
/write_owner:allow
3:group@:add_file/write_data/add_subdirectory/append_data:deny
4:group@:list_directory/read_data/execute:allow
5:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
/write_attributes/write_acl/write_owner:deny
6:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
/read_acl/synchronize:allow
Just use ls -v and you already know who may do what. For ACL management just use chmod and other tools with additional parameters.
Why are there no NFSv4 ACLs on Linux? I found that SGI made patches back in 2008. See the excellent presentation.
Unfortunately SGI died.
Labels:
unusable
Thursday, April 12, 2012
Microsoft Browser
This is an old piece of humor from the '90s, about IE6 most likely. A nostalgic time when Microsoft gave us lots of reasons to laugh.
Labels:
joke
Monday, March 26, 2012
cfengine3: howto make a symlink
files:
  guest.need_update::
    "/etc/localtime"
      handle => "timezone_link",
      comment => "Timezone link",
      link_from => ln_s("/usr/share/zoneinfo/Asia/Novosibirsk");
Friday, March 23, 2012
Fresh presentation about PostgreSQL performance tuning (2012)
PostgreSQL
Performance Tuning
BRUCE MOMJIAN (Senior Database Architect in EnterpriseDB)
Source: http://momjian.us/main/presentations/overview.html
There is lots of other interesting stuff there as well.
Copy on Google: https://docs.google.com/open?id=0B1Mh3B-dAA04NXcyOVZJSEtUZ0M0c0FFMkF0T0hKQQ
Labels:
postgres
Saturday, March 17, 2012
The Three Little Pigs (Chinese version)
Once upon a time there were three little pigs. All day long they did nothing but play.
Then they learned about the so-called Grey Threat. Lin-Lin built a house out of straw...
Lyan-Lyan built one out of leaves and droppings...
And the wise Lyun-Lyun started laying a foundation to build a real, sturdy house.
Cutting production time at the expense of quality let Lin-Lin and Lyan-Lyan carry on having fun.
Lyun-Lyun, meanwhile, diligently built a fortress out of bricks...
And then one day, when Lyun-Lyun was laying the sixth course...
he felt that someone was watching his work very closely...
...And that is the end of the tale.
(from trinixy.ru)
Labels:
joke
Monday, March 12, 2012
a script to remove old partitions from a zabbix database
This script searches the PostgreSQL database for the oldest partition tables and removes them if they are old enough. It only handles the %Y_%m_%d suffix format.
#!/usr/bin/python
import sys;
#import datetime;
from datetime import datetime, date, time, timedelta, tzinfo;
import time;
import psycopg2;

###### User params ########
TABLES = ( "history", "history_log", "history_uint", "history_str" );
MONTH = 3;
LIMIT = 3;
db_params = "host='localhost' dbname='zabbix' user='zabbix' password='xxx'";
##### End of User params ##

th_mth_ago = (datetime.today() - timedelta(MONTH*365/12)).strftime("%Y_%m_%d");
log_ft = "%Y-%m-%d %H:%M:%S";

def pg(curs,table,q):
    cnds = [];
    curs.execute(q);
    for row in curs:
        cnds.append(row[0]);
    return cnds;

def main():
    conn = psycopg2.connect(db_params);
    curs = conn.cursor();
    print "%s" %(datetime.now().strftime(log_ft));
    for table in TABLES:
        table_long = "partitions." + table + "_" + th_mth_ago;
        query = "SELECT tablename FROM pg_tables WHERE schemaname='partitions' AND tablename like '" + table + "%' order by tablename limit " + str(LIMIT);
        candidates = pg(curs,table,query);
        if candidates:
            print "\n %s oldest tables:" % (LIMIT);
            for c_table in candidates:
                sys.stdout.write( '\n\t' + c_table + '');
                ct = str.rsplit(c_table,table+"_");
                candidate_time = ct[1];
                dt = datetime.strptime(candidate_time,"%Y_%m_%d");
                # print dt.strftime("%Y_%m_%d");
                if (dt.strftime("%Y_%m_%d") < th_mth_ago):
                    sys.stdout.write(' is older than three months.\n');
                    try:
                        q1 = "ALTER TABLE partitions." + table + "_" + dt.strftime("%Y_%m_%d") + " NO INHERIT " + table;
                        print "%s: %s;" %(datetime.now().strftime(log_ft),q1);
                        curs.execute(q1);
                    except:
                        print "%s: No inheritance!"%(datetime.now().strftime(log_ft));
                        conn.rollback();
                        pass;
                    sys.exc_clear();
                    try:
                        q2 = "DROP RULE route_" + dt.strftime("%Y_%m_%d") + " ON " + table;
                        print "%s: %s;" %(datetime.now().strftime(log_ft),q2);
                        curs.execute(q2);
                    except:
                        print "%s: No INSERT rule!" %(datetime.now().strftime(log_ft));
                        conn.rollback();
                        pass;
                    sys.exc_clear();
                    conn.commit();
                    q3 = "DROP TABLE partitions." + table + "_" + dt.strftime("%Y_%m_%d") + " CASCADE";
                    print "%s: %s;" %(datetime.now().strftime(log_ft),q3);
                    curs.execute(q3);
                    conn.commit();
                    print "%s: OK" %(datetime.now().strftime(log_ft));
    print "\n%s: Finished" % (datetime.now().strftime(log_ft));
    curs.close()
    conn.close()
    sys.stdout.flush()
    sys.stderr.flush()

if __name__ == "__main__":
    sys.exit(main())
# author: crypt
# url: http://crypt47.blogspot.com
# EOF
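A hypothetical way to schedule it (the script path and log file are assumptions):

# crontab entry: drop old partitions every night at 00:30
30 0 * * * /usr/local/bin/zabbix_drop_old_partitions.py >> /var/log/zabbix/drop_partitions.log 2>&1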
Wednesday, March 7, 2012
Tuesday, February 28, 2012
Zabbix Performance Tuning by Alexei Vladishev at Zabbix Conference 2011
http://www.slideshare.net/xsbr/alexei-vladishev-zabbixperformancetuning
Labels:
zabbix
Monday, February 27, 2012
Friday, February 10, 2012
psycopg2 template
#!/usr/bin/python
'''
description here: This is a template how to run PostgreSQL queries from a python script
Params:
$1 - argv...
'''
import sys
import psycopg2
from datetime import *
if len(sys.argv) <> 3:
    print "Usage:\n %s param1 param2" % sys.argv[0]
    sys.exit(0)

param1=sys.argv[1]
param2=sys.argv[2]

conn_string = "host='localhost' dbname='db' user='user' password='pass'"
query = "select bla bla bla " +str(param1)

def main():
    conn = psycopg2.connect(conn_string)
    curs = conn.cursor()
    curs.execute(query)
    for row in curs:
        print "%s" % row
    # for update/insert
    # conn.commit()
    curs.close()
    conn.close()
    sys.stdout.flush()
    sys.stderr.flush()

if __name__ == "__main__":
    sys.exit(main())
Wednesday, February 8, 2012
Noninteractive (unattended) locale change on Debian
Edit: /etc/locale.gen
Execute: DEBIAN_FRONTEND=noninteractive dpkg-reconfigure locales
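Put together, an unattended run might look like this (en_US.UTF-8 is just an example locale):

# uncomment the desired locale in /etc/locale.gen, regenerate, set the default
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
DEBIAN_FRONTEND=noninteractive dpkg-reconfigure locales
update-locale LANG=en_US.UTF-8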
Monday, January 9, 2012
Encrypted /home with cryptsetup easy and fast
1: partition;
modprobe dm-crypt aes-x86_64
cryptsetup -c aes-xts-plain -y -s 512 luksFormat /dev/sda4
# Also remember that in order for other accounts to still be able to unlock this partition, you need to add all Linux account passwords as keys to unlock the partition.
cryptsetup luksAddKey /dev/sda4
cryptsetup luksOpen /dev/sda4 home
ls /dev/mapper/home
mkfs.ext4 -O dir_index /dev/mapper/home
<volume fstype="crypt" path="/dev/sda4" mountpoint="/home" />
2: file
losetup -f /path/file
cryptsetup -c aes-xts-plain -y -s 512 luksFormat /dev/loop0
# Also remember that in order for other accounts to still be able to unlock this partition, you need to add all Linux account passwords as keys to unlock the partition.
cryptsetup luksAddKey /dev/loop0
cryptsetup luksOpen /dev/loop0 home
ls /dev/mapper/home
mkfs.ext4 -O dir_index /dev/mapper/home
<volume fstype="crypt" path="/dev/sda4" mountpoint="/home" options="loop"/>
Oracle (Sun) Java 6 in Ubuntu 10.04 LTS
First Part:
- Download the *rpm.bin package from the official site. Trying to install it directly fails, but it leaves an unpacked rpm package behind.
- Convert with alien: alien --scripts ./jre-6u30-linux-amd64.rpm
dpkg -i jre_1.6.030-1_amd64.deb
- update-alternatives --get-selections |grep java
- update-alternatives --install "/usr/bin/java" "java" "/usr/java/default/bin/java" 1
- update-alternatives --set java /usr/java/default/bin/java
- update-alternatives --list java to check
- update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/java/default/bin/java" 1
- update-alternatives --set javaws /usr/java/default/bin/javaws
Google Chrome found the plugin without problems. The steps for Firefox are as follows:
- Check what's in /usr/lib/mozilla/plugins/ and remove the OpenJDK/IcedTea plugin or whatever is there. The name "libjavaplugin.so" is important.
- Simply link it this way:
ln -s /usr/java/default/lib/amd64/libnpjp2.so /usr/lib/mozilla/plugins/libjavaplugin.so
Or use update-alternatives:
update-alternatives --install "/usr/lib/mozilla/plugins/libjavaplugin.so" "mozilla-javaplugin" "/usr/java/default/lib/amd64/libnpjp2.so" 1
update-alternatives --set mozilla-javaplugin "/usr/java/default/lib/amd64/libnpjp2.so"
Third part. Don't forget about Look&Feel:
(Default Ubuntu location)
/etc/java-6-sun/swing.properties
# uncomment to set the default look and feel to GTK
swing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel
(New with the authentic Java install):
/usr/java/default/lib/swing.properties
# uncomment to set the default look and feel to GTK
swing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel
Set JAVA_HOME=/usr/java/default in /etc/environment, or export it in .bashrc.
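To verify the result, something like this should do (the exact version strings will vary):

java -version                                   # should report the Sun/Oracle JRE
update-alternatives --display java              # shows which alternative is active
update-alternatives --display mozilla-javaplugin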