Archive for the ‘Linux’ Category

* speed up debian

Posted on April 7th, 2012 by Alex. Filed under Linux.

The following tricks to speed up several things in Debian were collected from multiple forums.

Application Acceleration

There is a package called preload, an “adaptive readahead daemon”. It analyzes the programs the user starts and tries to accelerate them based on the user's previous behavior. Whether it works is too early for me to decide. Since the effect (if any) probably creeps in slowly, I will not be able to feel any difference. In a week or a month, I am planning to deactivate the service temporarily to see the difference.

In this post the author also suggests appending the lines vm.swappiness=20 and vm.vfs_cache_pressure=50 to /etc/sysctl.conf and describes them briefly, so I will not repeat that here.
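For reference, the two lines as they would appear in /etc/sysctl.conf (after editing, apply them as root with sysctl -p; the defaults noted in the comments are the usual kernel defaults):

```
# Prefer keeping applications in RAM instead of swapping them out (default: 60)
vm.swappiness=20
# Keep the inode/dentry caches around longer (default: 100)
vm.vfs_cache_pressure=50
```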


First, it is always a good idea to deactivate services which are of no use. For instance, why should someone start the bluetooth daemon if there is no bluetooth device around? A little tool called sysv-rc-conf lets the user comfortably decide what should be started in which runlevel.

After installing e4rat, I observed an incredible speedup during booting. It is my understanding that e4rat ensures that files required during the boot sequence are nicely lined up on the hard disk, so that all of them can be read in one single linear read request. Therefore e4rat will not be very helpful if you boot from an SSD, since there are no mechanical parts that need to move around, slowing down the accesses. There is a Debian package in the download section, so there is no need to compile the software yourself. Install it with dpkg -i e4rat.deb as root. The following instructions are partly taken from the README file and are quite straightforward.

  1. Restart your machine and add the collection parameter from the e4rat README to the kernel command line in GRUB: edit the entry (press e), append the parameter to the line starting with kernel, and press Ctrl+x to boot the kernel.
  2. If you are the only user of your machine, wait till the X server comes up and log in. This ensures that even your graphical login is accelerated. If you are working on a multiuser machine, just wait till you see the login prompt of the X server.
  3. Change to a text console (by pressing Ctrl+Alt+F1), log in as root and execute init 1.
  4. It will take some time to collect the data. It says something like Sending TERM to applications. In my case it took around 2 minutes until init 1 was reached. Now you are in the single-user environment. Type in your root password to log in.
  5. Execute e4rat-realloc /var/lib/e4rat/startup.log
  6. Add init=/sbin/e4rat-preload to the kernel parameters (like you did in the first step). Please note that if a new kernel is installed, the configuration file of GRUB is regenerated and your entry vanishes. Even if you edit the entry as you did in the first step, these changes are not permanent. To permanently add this line, change /etc/default/grub to
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX="rootfstype=ext4 resume=/dev/sda6 i915.i915_enable_rc6=1 init=/sbin/e4rat-preload"

    The entries should be self-explanatory.

  7. That’s all. Reboot by typing init 6 and enjoy the speedup.

To use e4rat, the package auditd needs to be removed, since there is a conflicting dependency. If you use your own kernel, you need to activate CONFIG_AUDIT and CONFIG_AUDITSYSCALL.
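If you build your own kernel, the two options appear in the kernel's .config as:

```
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
```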

Cleaning up your system

Apart from the usual apt-get clean, apt-get autoremove and this one, I found a few more tricks for cleaning up the system.


Many packages get installed over the course of time. They grow in number, since Debian is a system that is installed once and then runs forever. For instance, if I compiled something that is not in the repositories and which depended on additional libraries, these libraries most likely remained on the system, since I probably forgot which libraries and dependencies I had installed. After compilation these development files can be removed again; otherwise they just consume hard disk space. A neat program to find all packages that were installed manually and have no dependency or reason to stay installed is deborphan.

Executing deborphan -H -a -z shows this list. Carefully go through it and purge the packages which are obsolete. However, there are a few you might want to keep. For instance, if you use ekiga and installed it manually, deborphan lists that one as well.


You might also want to try out localepurge which “reclaims disk space removing unneeded localizations”. However the full descriptions states:

This is a script to recover disk space wasted for unneeded locales, Gnome/KDE localizations and localized man pages. Depending on the installation, it is possible to save some 200, 300, or even more mega bytes of disk space dedicated for localization you will most probably never have any use for. It is run automagically upon completion of any apt installation actions.

Please note, that this tool is a hack which is *not* integrated with Debian’s package management system and therefore is not for the faint of heart. This program interferes with the Debian package management and does provoke strange, but usually harmless, behaviour of programs related with apt/dpkg like dpkg-repack, reportbug, etc. Responsibility for its usage and possible breakage of your system therefore lies in the sysadmin’s (your) hands.

Please definitely do abstain from reporting any such bugs blaming localepurge if you break your system by using it. If you don’t know what you are doing and can’t handle any resulting breakage on your own then please simply don’t use this package.

So it is up to you whether to use it.

This list is far from complete. New things will be added if I stumble upon them. If you know something that helped you speed up your system, please let me know. One condition though: the system should remain the same as before. Obviously, someone who wants to speed up a machine to its maximum will not use e.g. Gnome, KDE or other heavyweights. But many users want to stick with what they have and just tune it a bit.


* compressed and encrypted dropbox

Posted on February 24th, 2012 by Alex. Filed under Linux.

Introduction and Requirements

Somewhere I found step-by-step instructions on how to automatically compress and encrypt files which are copied into the misty, cloudy space of Dropbox. But I am not able to recall where it was. So I had to “reinvent” the wheel and wrote another manual on how to do it. I assume that you have already installed Dropbox successfully. If not, you can download the software from here. The installation is quite easy and does not involve many steps.

These instructions are not limited to Dropbox only. They can be used with any other cloud service that synchronizes directories on your Linux box.

The files have to pass through the pipeline in a specific order: first they are compressed, then encrypted, and then uploaded. If they are first encrypted and then compressed, the compression algorithm does not have much chance to reduce the file size, since all your text files are, well, a random mix of bits after the encryption step.
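A small shell experiment illustrates why the order matters (the file names are arbitrary): compressible text shrinks dramatically, while random bytes, which is what ciphertext looks like, do not.

```shell
# "text": 100 kB of the letter 'a' (highly compressible)
head -c 100000 /dev/zero | tr '\0' 'a' > text.dat
# stand-in for encrypted data: 100 kB of random bytes
head -c 100000 /dev/urandom > random.dat
gzip -c text.dat > text.dat.gz       # shrinks to a few hundred bytes
gzip -c random.dat > random.dat.gz   # stays at roughly 100 kB
ls -l text.dat.gz random.dat.gz
```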

Two tools are required to accomplish these tasks: encfs and fusecompress. Both are packaged in the standard Debian system. With

# apt-get install encfs fusecompress

they can be installed easily. Then three directories are required:

$ mkdir ~/Dropbox/.encrypted ~/.Dropbox_compressed ~/Dropbox_Encrypted

If a file is copied into ~/Dropbox_Encrypted, it is compressed by fusecompress and stored in ~/.Dropbox_compressed, from where encfs picks it up, encrypts it and stores it in ~/Dropbox/.encrypted, which is automatically synchronized with the online space since it is inside the Dropbox directory. All of this happens automatically and transparently, without any user intervention.


To do so, you will have to mount these directories. First ~/.Dropbox_compressed is mounted to compress the data. Execute as normal user

$ fusecompress -o fc_c:bzip2 ~/.Dropbox_compressed/ ~/Dropbox_Encrypted/

As far as I know, you have to use absolute path names. fusecompress supports multiple compression algorithms: bzip2, lzo, zlib, lzma, and none. The manual says:

Lzo is the fastest, bzip2 has high compression ratio, but it is the slowest, zlib is somewhere between them in terms of speed and compression ratio and lzma has highest compression ratio, it’s compression speed is better than bzip2 and decompression is fast. The none compression method is there for testing only as it doesn’t compress data, it copies the data without any modification (fusecompress’ header is added).

Currently I am using bzip2, but I would like to use lzma due to its higher performance and compression ratio. However, in Debian Squeeze lzma support is not compiled into fusecompress. Since I currently do not have the time to compile and try things out, I am happy with bzip2. To see the supported compression methods, type

$ fusecompress

in a terminal window.

If you get an error message stating: fuse: failed to open /dev/fuse: Permission denied, check if your username is included in the group fuse by typing the command groups in the terminal window. If fuse is not mentioned in the list, you have to add yourself to it. Execute as root

# adduser yourUsername fuse

and log out and in again to update the group permissions. After that the command above should work.

If everything worked, you can try and copy a text file into ~/Dropbox_Encrypted/. A listing of this directory will show the file and its original size. Change to ~/.Dropbox_compressed/ and you will find the same file. However the file size is different and should be much smaller.


Now everything that is copied to ~/.Dropbox_compressed/ shall be encrypted. Before we continue, you should tell the Dropbox service not to synchronize the configuration file that encfs automatically creates in the encrypted directory, by executing

$ dropbox exclude add ~/Dropbox/.encrypted/.encfs6.xml

encfs will create a small file that contains all your settings. The critical part is that it also contains your password as a hash value. If you can live with the inconvenience of having to copy the file to every computer manually before it can access the encrypted directory, or if you are the only person who should have access, you should not upload that file.

After that you are ready for the encryption step. Execute as normal user

$ encfs ~/Dropbox/.encrypted/ ~/Dropbox_Encrypted/
Creating new encrypted volume.
Please choose from one of the following options:
enter "x" for expert configuration mode,
enter "p" for pre-configured paranoia mode,
anything else, or an empty line will select standard mode.

Before creating the encrypted volume, encfs asks lots of questions in expert mode. Most of the settings require you to read the manual, so the easier but also safe alternative is to use the pre-configured paranoia mode by hitting p. After entering and confirming a password, you are all set. If you copy a file into ~/Dropbox_Encrypted/, a new file should show up in ~/Dropbox/.encrypted/ automatically. But since it is encrypted, not only the file content but also the filename is disguised.

What to do after reboot?

If you reboot your Linux box, the mounted directories are lost and the compression and encryption chain is broken. If you start Dropbox now, the encrypted folder looks empty and the Dropbox service will delete all the files that were uploaded into the cloud. You have to ensure that the chain is re-established before Dropbox is started. Since I am the only user of my machine, I included a few lines in /etc/rc.local, which is executed during startup:

echo "Mounting compression/encFS for Dropbox..."
sudo -u yourUsername fusecompress -o fc_c:bzip2 /home/yourUsername/.Dropbox_compressed/ /home/yourUsername/Dropbox_Encrypted/
sudo -u yourUsername encfs /home/yourUsername/Dropbox/.encrypted/ /home/yourUsername/.Dropbox_compressed/

The sudo command is required since the directories should belong to you and not to root. If they belong to root, write access is not permitted. Unfortunately, encfs requires you to type in your password, and it asks for it on the console. If you use Ubuntu, you might not even see the prompt and are immediately forwarded to the graphical login screen. It is not very comfortable to switch to the console, type in the password, switch back to the graphical login and proceed. But luckily encfs can also accept the password from standard input without prompting. If you are the only user and do not mind the password being stored here in clear text, you can alter the lines above to:

echo "Mounting compression/encFS for Dropbox..."
sudo -u yourUsername fusecompress -o fc_c:bzip2 /home/yourUsername/.Dropbox_compressed/ /home/yourUsername/Dropbox_Encrypted/
echo "yourPassword" | sudo -u yourUsername encfs --stdinpass /home/yourUsername/Dropbox/.encrypted/ /home/yourUsername/.Dropbox_compressed/

and you are good to go. As alternatives you might also want to try gnome-encfs or cryptkeeper. But since I like this solution, I did not try them and hence cannot say anything about them.

Unmounting the directories

First shut down the Dropbox service:

$ dropbox stop

Then you can unmount the directories by typing

$ fusermount -u ~/.Dropbox_compressed
$ fusermount -u ~/Dropbox_Encrypted


* cmake error at … library not found

Posted on December 30th, 2011 by Alex. Filed under Linux.

If you use cmake and have a library whose header files cannot be found (because, say, you compiled the library yourself and put it somewhere non-standard), cmake will most likely abort with an error message like:

-- Current HG revision is cf9be9344356
-- Assuming this is a tarball (release) build for 2011.4.0
-- Found wxWidgets: TRUE
-- Found TIFF: /usr/include
-- Found JPEG: /usr/include
-- Found PNG: /usr/include
-- WARNING: you are using the obsolete 'PKGCONFIG' macro use FindPkgConfig
-- Found OPENEXR: /usr/lib/;/usr/lib/;/usr/lib/;/usr/lib/;/usr/lib/
-- GLUT Found
-- Found Glew:
CMake Error at CMakeModules/FindPANO13.cmake:76 (MESSAGE):
libpano13 version: 2.9.18 required, 2.9.14 found
Call Stack (most recent call first):
CMakeLists.txt:235 (FIND_PACKAGE)

(This comes from the compilation of hugin.) So how do you tell cmake where to find it? Open CMakeModules/FindPANO13.cmake in an editor of your choice. Near the beginning you will find a code snippet looking similar to this one:

FIND_PATH(PANO13_INCLUDE_DIR pano13/panorama.h
NAMES pano13
"${PANO13_INCLUDE_DIR}/pano13/Release LIB CMD"
"${PANO13_INCLUDE_DIR}/pano13/Release CMD/Win32"

As you can see, a few standard directories are searched for the panorama.h file. If for whatever reason the library cannot be found, note the name of the include path variable (here: PANO13_INCLUDE_DIR) and invoke cmake as in the following example:

cmake -DCMAKE_INSTALL_PREFIX=/usr/local -DPANO13_INCLUDE_DIR=/path/to/your/library/

This should work.


* shutdown script

Posted on September 29th, 2011 by Alex. Filed under Linux.

If computers are not in use, I would like to shut them down automatically to save power and also to extend the lifetime of the hardware (especially mechanical HDDs). Many users forget or are too lazy to shut down their machines when they leave their office, for example. That is why I wrote this little script that can be executed as a cronjob as root.

It will not shut down the computer if one of the following conditions is met:

  1. A file called dontShutdown exists in /tmp.
  2. Users are logged in.
  3. Any screen sockets exist.
  4. The load on the machine is more than 20% (of one core/CPU).

You can comment/uncomment the if conditions in the script accordingly. The reason for the last condition is to ensure that the computer is shut down when it is idle, since I have the habit of not terminating screen sessions. However, if you e.g. download a big file in a screen session, which does not require many CPU resources, the computer will also be shut down. Similarly, if you set the load threshold so low that e.g. services which sporadically come to life cause a short burst of CPU activity, the machine will never be shut down. Adjust the parameters according to your needs. After that, ensure that the script is executed periodically, e.g. by putting it into the root user's crontab (here every 30 minutes):
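A minimal sketch of the four checks might look like this (this is not the original script; the marker file path, the screen socket location and the 0.20 load threshold are assumptions):

```shell
#!/bin/sh
# Returns 0 (i.e. "shut down") only if all idle checks pass.
check_shutdown() {
    # 1. A marker file in /tmp blocks the shutdown.
    [ -e /tmp/dontShutdown ] && return 1
    # 2. Logged-in users block the shutdown.
    [ -n "$(who)" ] && return 1
    # 3. Screen sockets of any user block the shutdown.
    [ -n "$(find /var/run/screen -mindepth 2 2>/dev/null)" ] && return 1
    # 4. A 1-minute load average above 0.20 blocks the shutdown.
    awk '$1 > 0.20 { exit 1 }' /proc/loadavg || return 1
    return 0
}

# In the cron job, call it like this:
#   check_shutdown && shutdown -h now
```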

*/30 * * * * /root/ > /dev/null 2>&1


The annoying legal part: Do whatever you want with the file.



* tar is a coward

Posted on September 6th, 2011 by Alex. Filed under Linux.

While doing some work in the console, the Linux user sometimes comes across rather curious things, especially error messages. While some error messages are very cryptic and confusing, resulting in a few hours spent on research, some are funny and entertaining. Most of the time this happens due to misspellings or forgotten parameters. For example, if you want to unpack the content of a .tar.gz file, but instead of typing

tar xfvz somefile.tar.gz

your mind wanders around and finally you execute

tar cfvz somefile.tar.gz

which creates a tar archive, the resulting error message will be:

~/tmp$ tar cfvz somefile.tar.gz
tar: Cowardly refusing to create an empty archive
Try `tar --help' or `tar --usage' for more information.

Such a coward….


* debian squeeze on sony c series – vpccb15fg

Posted on September 3rd, 2011 by Alex. Filed under Linux.

After my previous laptop broke, I was looking for a new one. After an unconvincing chat session with Dell experts, I finally decided to go for a Sony laptop (VPCCB15FG/B).

How to install Debian Squeeze on Sony C-Series?

Step 1: Installing a minimal system

Unfortunately, the kernel that is shipped with Squeeze does not include the driver for the Ethernet card (the output of lspci, including the kernel modules for the hardware, is at the end of this post). Hence the “Smaller CDs” (Installing Debian GNU/Linux via the Internet) do not work. The “Small CDs” install a minimal but working Debian on the machine, so I went for this one. If the installer in one of the last steps allows you to select additional software such as a desktop system or different kinds of servers, remove the selections from all entries. You will be left with a minimal system and a black console with a blinking cursor. If you are new to the terminal or console, simply follow the steps. If you are an advanced user, feel free to skip the elaborate texts.

Step 2: Activating the network adapter

Newer kernels include the driver that is necessary to access the Internet and download additional packages. So first you have to install several packages via the offline method. Selecting a mirror that is close by should not be a problem. You need to select the following packages and all their dependencies for installation: kernel-package, libncurses5-dev (if you want to use make menuconfig), and, for later or if you want to use the provided .config file (refer to the end of this post), firmware-linux-nonfree and lzop. You also need the kernel sources (e.g. from kernel.org). At the time of writing I used the most current version, 3.0.4, so the .config file is for that version. After installing all packages via the apt-get offline method, perform the following steps:

  • copy the kernel sources/patches to /usr/src/
  • unpack the kernel sources: tar xfj linux-3.0.tar.bz2
  • unpack any patches that you downloaded to patch the kernel to a more current version: bunzip2 patch-3.0.4.bz2
  • create a link from linux to the newly created directory: ln -sv linux-3.0 linux
  • unpack the config file (bunzip2), copy it to /usr/src/linux and rename it to .config (mv config-3.0.4 .config). (Since the file name starts with a ., it is not visible with a plain ls. Do an ls -la instead.)
  • cd /usr/src/linux
  • Apply the patch, if necessary: patch -p1 < ../patch-3.0.4
  • If you want to configure your own kernel, do a make menuconfig and make the necessary changes.
  • Execute the following commands to compile the kernel:
    • export CONCURRENCY_LEVEL="4" (This will use both cores plus Hyper-Threading to compile the kernel, so that the compilation takes something like 5 minutes.)
    • make-kpkg kernel-image --append-to-version -1 (The last number needs to change whenever you want to compile your kernel again, in case you forgot to select a driver or functionality)
  • If you get an error, check if you installed firmware-linux-nonfree. Otherwise you can find the kernel in a .deb package format in /usr/src/
  • Install the new kernel: dpkg -i /usr/src/linux-image-3.0.4-1_3.0.4-1-10.00.Custom_amd64.deb
  • And reboot: init 6

Step 3: Installing the software

After the new kernel came up, you probably need to configure your network adapter in /etc/network/interfaces. Have a look at the example below and adapt it to your needs.

auto eth0
# for dhcp
# iface eth0 inet dhcp
# for static addresses (address/netmask/gateway are examples, adapt them)
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
# switch off wake-on-lan (optional; ethtool needs to be installed separately)
post-up /sbin/ethtool -s $IFACE wol d
post-down /sbin/ethtool -s $IFACE wol d

Restart the network: /etc/init.d/networking stop, followed immediately by /etc/init.d/networking start.

Now you should be able to use tasksel or aptitude to select the necessary software comfortably.

Some Notes

The laptop has two graphics adapters: the integrated Intel HD 3000 and an ATI/AMD Radeon HD 6600M. Since I use Linux mainly for work and not for fancy games, I intended to run only the power-saving Intel graphics card. Compiz runs without any problems on the Intel adapter. Switching to the more powerful ATI card while the X server is not running should be possible with vgaswitcheroo, but I have not tested it yet. Apart from that, everything works without any problems.

In case the kernel compilation is too complicated, you might want to consider Ubuntu instead. However, in my experience Ubuntu with its bleeding-edge software lacks stability. E.g. in my case Unity and its desktop crashed several times. Also, Gparted crashed while repartitioning my hard drive. Fortunately nothing serious happened.



00:00.0 Host bridge: Intel Corporation Sandy Bridge DRAM Controller (rev 09)
Kernel driver in use: agpgart-intel

00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) (prog-if 00 [Normal decode])
Kernel driver in use: pcieport

00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
Kernel driver in use: i915

00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04)

00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 04) (prog-if 20 [EHCI])
Kernel driver in use: ehci_hcd

00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 04)
Kernel driver in use: HDA Intel

00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b4) (prog-if 00 [Normal decode])
Kernel driver in use: pcieport

00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b4) (prog-if 00 [Normal decode])
Kernel driver in use: pcieport

00:1c.2 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 3 (rev b4) (prog-if 00 [Normal decode])
Kernel driver in use: pcieport

00:1c.3 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 4 (rev b4) (prog-if 00 [Normal decode])
Kernel driver in use: pcieport

00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 04) (prog-if 20 [EHCI])
Kernel driver in use: ehci_hcd

00:1f.0 ISA bridge: Intel Corporation Cougar Point LPC Controller (rev 04)

00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 04) (prog-if 01 [AHCI 1.0])
Kernel driver in use: ahci

00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 04)
Kernel driver in use: i801_smbus

01:00.0 VGA compatible controller: ATI Technologies Inc NI Whistler [AMD Radeon HD 6600M Series] (prog-if 00 [VGA controller])
Kernel driver in use: radeon

02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
Kernel driver in use: ath9k

03:00.0 SD Host controller: Ricoh Co Ltd Device e823 (rev 04)
Kernel driver in use: sdhci-pci

03:00.1 System peripheral: Ricoh Co Ltd Device e232 (rev 04)

04:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) (prog-if 30)
Kernel driver in use: xhci_hcd

05:00.0 Ethernet controller: Atheros Communications Device 1083 (rev c0)
Kernel driver in use: atl1c


Bus 002 Device 003: ID 045e:0745 Microsoft Corp.
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 05ca:18c0 Ricoh Co., Ltd
Bus 001 Device 004: ID 0489:e00f Foxconn / Hon Hai
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

kernel .config

This .config provides compiled-in drivers for all the hardware found in the laptop. It does not include many things that are compiled into the standard kernels, such as IPv6 or other (in my case unused) protocols and file systems. Feel free to adapt this configuration file according to your needs.


* wifi kill switch and ifconfig wlan0 up/down

Posted on June 14th, 2011 by Alex. Filed under Linux.

To save battery power and precious runtime of a laptop, it is advisable to switch off devices that are not in use. Such a device would be the built-in WIFI or WLAN adapter, hence many laptops have a hardware kill switch. However, in Linux the WLAN adapter is most probably not able to reconnect to any network after the kill switch has been activated (that means the adapter has been switched off) and deactivated (WLAN adapter switched on). Instead the user has to type ifconfig wlan0 up as root in a terminal to get things back to work.

To avoid this inconvenience and to automate the process, udev is of great help. As already described in this post and more extensively for instance on this web page, udev is a daemon running in the background which is able to run user programs if something happens to the laptop's hardware. That can be a simple event such as plugging a mouse into the USB port or (as in our case discussed here) a change of the state of the kill switch.

First, we have to find out how to catch the kill switch event. For that, execute

udevadm monitor

as root. The output will look like

root@jitu:~# udevadm monitor
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

Then turn the kill switch and more information is printed:

root@jitu:/home/alefel# udevadm monitor
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[1307970746.488938] change /devices/pci0000:00/0000:00:1c.5/0000:0c:00.0/ieee80211/phy0/rfkill0 (rfkill)
UDEV [1307970748.257897] change /devices/pci0000:00/0000:00:1c.5/0000:0c:00.0/ieee80211/phy0/rfkill0 (rfkill)

This means that the kernel and udev detected a change at the device /devices/pci0000:00....../rfkill0. Copy this path and pass it as a parameter to the following command to find out what actually changed here:

root@jitu:~# udevadm info -a -p /devices/pci0000:00/0000:00:1c.5/0000:0c:00.0/ieee80211/phy0/rfkill0

Many sections are printed on the screen each starting with “looking at device …”. The most important section is the very first one. Yours will look similar to this output:

looking at device '/devices/pci0000:00/0000:00:1c.5/0000:0c:00.0/ieee80211/phy0/rfkill0':

The important attribute is the value of ATTR{state}. On my laptop the value is 1 if the kill switch is off (and the adapter is turned on) and 2 if the kill switch is activated. You might have different values here.

Now we have to let udev know what should be done (or which program should be executed) when the state of the kill switch changes. Since the switch is network related, I created new rules in 70-persistent-net.rules:

root@jitu:~# anyEditorYouLike /etc/udev/rules.d/70-persistent-net.rules

Add these two lines to the file:

SUBSYSTEM=="rfkill", ACTION=="change", ATTR{state}=="1", ATTR{type}=="wlan", RUN+="/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --allow hotplug wlan0"
SUBSYSTEM=="rfkill", ACTION=="change", ATTR{state}=="2", ATTR{type}=="wlan", RUN+="/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifdown -- --allow hotplug wlan0"

The purpose of the other ATTR values is basically to identify the device that triggers the execution of the program. They can be found in the sections of the udevadm info output above. If these lines do not work, you can also try these lines instead:

SUBSYSTEM=="rfkill", ACTION=="change", ATTR{state}=="1", ATTR{type}=="wlan", RUN+="/sbin/ifup --allow hotplug wlan0"
SUBSYSTEM=="rfkill", ACTION=="change", ATTR{state}=="2", ATTR{type}=="wlan", RUN+="/sbin/ifdown --allow hotplug wlan0"

If you turn the kill switch, the wifi interface (wlan0 in my case) now comes up or goes down automatically. If not, something went wrong. To support your debugging, stop the udev daemon and restart it in debugging mode:

root@jitu:~# /etc/init.d/udev stop
Stopping the hotplug events dispatcher: udevd.
root@jitu:~# udevd --debug

Many lines of output will be spilled out. Near the top there should be a line

Jun 14 00:07:56 jitu udevd[24780]: reading '/etc/udev/rules.d/70-persistent-net.rules' as rules file

without an error. After all the initialization text has been printed, turn the kill switch and observe what udev does:

Jun 14 00:08:04 jitu udevd[24780]: seq 3022 queued, 'change' 'rfkill'
Jun 14 00:08:04 jitu udevd[24780]: seq 3022 forked new worker [25359]
Jun 14 00:08:04 jitu udevd-work[25359]: seq 3022 running
Jun 14 00:08:04 jitu udevd-work[25359]: device 0xb975fc08 has devpath '/devices/pci0000:00/0000:00:1c.5/0000:0c:00.0/ieee80211/phy0/rfkill0'
Jun 14 00:08:04 jitu udevd-work[25359]: RUN '/sbin/ifup --allow hotplug wlan0' /etc/udev/rules.d/70-persistent-net.rules:18
Jun 14 00:08:04 jitu udevd-work[25359]: RUN 'socket:@/org/freedesktop/hal/udev_event' /lib/udev/rules.d/90-hal.rules:2
Jun 14 00:08:04 jitu udevd-work[25359]: '/sbin/ifup --allow hotplug wlan0' started
Jun 14 00:08:06 jitu udevd-work[25359]: '/sbin/ifup --allow hotplug wlan0' returned with exitcode 0
Jun 14 00:08:06 jitu udevd-work[25359]: passed 256 bytes to socket monitor 0xb975fc08
Jun 14 00:08:06 jitu udevd-work[25359]: passed -1 bytes to netlink monitor 0xb975fb28

If the exit code of the executed program differs from 0, something went wrong. Try to understand the error message that is also printed. If your rule is not executed at all, check the ATTR parameters in the rule file above.

Loading different settings depending on the SSID

For even more comfort, a network configuration should be loaded based on the SSID of the current access point. For that purpose wpa_supplicant supports ID tags (id_str) for each network configuration in /etc/wpa_supplicant/wpa_supplicant.conf.

Example files:





# The loopback network interface
auto lo
iface lo inet loopback

#-- WLAN interface
#auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
#wpa-debug-level 2

# no id_str given
iface default inet dhcp

iface home inet dhcp

iface AndroidAP inet static

Do not use tabs or any other whitespace before iface; it will not work otherwise, which took me some time to figure out. So whenever the WLAN adapter connects to an access point with a matching SSID, the ID tag is passed to wpa_action, which configures the adapter according to the settings given in interfaces. This makes manually reconfiguring the network adapter and restarting the network every time obsolete.
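For reference, a /etc/wpa_supplicant/wpa_supplicant.conf with id_str tags matching the iface stanzas above might look like this (SSIDs and passphrases are placeholders):

```
ctrl_interface=/var/run/wpa_supplicant

network={
    ssid="yourHomeSSID"
    psk="yourHomePassphrase"
    id_str="home"
}

network={
    ssid="AndroidAP"
    psk="yourHotspotPassphrase"
    id_str="AndroidAP"
}
```

A network block without an id_str falls back to the iface default stanza.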


* changing gnome proxy setting in the console

Posted on May 20th, 2011 by Alex. Filed under Linux.

For someone who uses a laptop in changing environments (e.g. home and office) in which e.g. the proxy settings differ, it is annoying to change the proxy settings every time the laptop is taken home or to the office. Although the Network Proxy Preferences in Gnome (under System -> Preferences) support multiple locations, it is still annoying if other settings need to be changed along with the proxy. E.g. in my setup the laptop checks if a second monitor is attached to the external VGA port. If so, the office environment is loaded together with the static office IP for the Ethernet adapter, the IPs for the DNS servers, and the settings for the proxy. If no secondary screen is found, the laptop assumes it is in the home environment and makes the necessary changes, again fully automated.

While changes to the DNS settings, the screen and the IP can easily be done through scripts executed as root, it is difficult to change the Gnome proxy settings in the same way. First of all, each user has his own proxy settings stored in $HOME/.gconf/system/http_proxy/%gconf.xml and $HOME/.gconf/system/proxy/%gconf.xml. Secondly, changing the file in the user's home directory does not have any effect, since the changes need to be communicated via DBus. Fortunately, Gnome ships with a program called gconftool, which allows the user to make the changes and then inform DBus about them as well.

With gconftool -R /system/http_proxy and gconftool -R /system/proxy the current proxy settings are displayed. So, if the command sudo is used, the root user is able to read out the settings of another user, e.g.

sudo -u $USERNAME gconftool -R /system/http_proxy

With the -s parameter, the settings can be changed. If root would like to activate the proxy for a specific user, root would use

sudo -u $USERNAME gconftool -s /system/http_proxy/use_http_proxy -t bool true

The new settings can again be displayed with sudo -u $USERNAME gconftool -R /system/http_proxy executed in the same console. However, if the user checks the settings in his Network Proxy Preferences in Gnome, no change has happened and the new settings are not applied.

The reason is a missing or wrong DBus session address, which is like a hook allowing programs to communicate with the bus. Every user has a different address for the bus. It can be displayed by executing dbus-launch in a console. So while gconftool is executed, we have to tell it which address to use to access the bus. The following script is executed as root and changes the settings properly for a specific user.

#-- unset the proxy for gnome

#-- extract the DBus Session Address
DBUS_SESSION_BUS_ADDRESS=$(sudo -u $USER dbus-launch --autolaunch=`cat /var/lib/dbus/machine-id` | grep BUS_ADDRESS | cut -d '=' -f 2-)

#-- deactivate the proxy
sudo -u $USER DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS gconftool -s /system/http_proxy/use_http_proxy -t bool false
sudo -u $USER DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS gconftool -s /system/proxy/mode -t string direct

