Monday, November 18, 2013

Suricata capture.kernel_drops caused by interrupt problems from single queue network cards

(update: added an even simpler solution)

For quite some time I was confronted with a huge amount of capture.kernel_drops in Suricata. After a lot of debugging, and with the help of the Suricata developers, I was able to pinpoint the problem to the NIC and the e1000e driver.

Confirmation of the problem

The output of top -H shows that only a single AFPacketeth thread is processing incoming traffic.
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND             
28769 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.49 Suricata-Main       
28770 root      20   0 2792m 1.8g 822m S   65 22.9   1:49.98 AFPacketeth11       
28771 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth12       
28772 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth13       
28773 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth14       
28774 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth15       
28775 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth16       
28776 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth17       
28777 root      20   0 2792m 1.8g 822m S    0 22.9   0:00.04 AFPacketeth18 

The stats.log file also confirms that only a single thread receives the traffic:
# tail -f /var/log/suricata/stats.log  | fgrep kernel_packet
capture.kernel_packets    | AFPacketeth11             | 42477691
capture.kernel_packets    | AFPacketeth12             | 609
capture.kernel_packets    | AFPacketeth13             | 283
capture.kernel_packets    | AFPacketeth14             | 408
capture.kernel_packets    | AFPacketeth15             | 436
capture.kernel_packets    | AFPacketeth16             | 464
capture.kernel_packets    | AFPacketeth17             | 613
capture.kernel_packets    | AFPacketeth18             | 307

The problem is that only one CPU core receives the interrupts from the network card, as /proc/interrupts shows:
           CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7       
 54:   14302207      0      0      0      0      0      0      0   PCI-MSI-edge   eth1-rx-0
 55:          6      0      0      0      0      0      0      0   PCI-MSI-edge   eth1-tx-0
 56:          5      0      0      0      0      0      0      0   PCI-MSI-edge   eth1

Note: it's possible that other cores 'sometimes' get a few interrupts, but the majority of them will go to one core. The cause is a network card that has only one receive queue; cards using the e1000e driver are one example. On the e1000e mailing list an Intel developer confirms this:
On Thursday 25 December 2008 17:59:40, Jeff Kirsher wrote:
> While the hardware supports 2 Tx and 2 Rx queues we do not have the
> full implementation in our Linux e1000e driver to support this.
> The performance to be gained from multiple queues is very small, and
> was not a requirement for our Linux software.
> So its not supported right now, and I doubt it will be implemented for
> e1000e as there is likely to be very little benefit.
> Although we may reconsider based on customer feedback.

Solution 1: Set AF_PACKET cluster_type to cluster_flow

In the configuration file you can specify the technique used for load balancing.
The documentation states:
Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
This is only supported for Linux kernel > 3.1
possible value are:
  * cluster_round_robin: round robin load balancing
  * cluster_flow: all packets of a given flow are send to the same socket
  * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket
To get an effect similar to Solution 2 (RPS and RFS) you should set cluster-type: cluster_flow.
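A minimal af-packet snippet in suricata.yaml could then look like this (the interface name, thread count and cluster-id come from my setup; adjust them to yours):
af-packet:
  - interface: eth1
    threads: 8
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes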

Solution 2: RPS and RFS

After hours of searching and reading I ended up on the FreeBSD wiki, where multiqueue support is compared between Linux and FreeBSD. It is also explained in the Linux kernel documentation. In short, activating RPS (Receive Packet Steering) and/or RFS (Receive Flow Steering) could solve my problem, as they offer packet distribution functionality for mono-queue NICs.

The graphs on the FreeBSD wiki ("Receive Packet Steering" and "Receive Flow Steering") make it a little bit more visual.



Configuration

The trick is to first tell irqbalance to stop balancing the specific IRQs. Edit /etc/default/irqbalance and set IRQBALANCE_BANNED_INTERRUPTS to a space-separated list of the IRQs it should leave alone.
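For the interrupts shown above this would be (the IRQ numbers come from my /proc/interrupts; yours will differ):
# /etc/default/irqbalance
# keep irqbalance away from the eth1 interrupts (IRQs 54-56 above)
IRQBALANCE_BANNED_INTERRUPTS="54 55 56"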
Don't forget to restart irqbalance afterwards: stop irqbalance; start irqbalance

Then I set the affinity to pin the interrupts on one specific CPU core.
# the first core handles this IRQ
echo 1 > /proc/irq/${irq}/smp_affinity

And activate RPS and RFS for the interface as explained in the documentation mentioned above.
echo "fe" > /sys/class/net/${iface}/queues/rx-0/rps_cpus
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 4096 > /sys/class/net/${iface}/queues/rx-0/rps_flow_cnt
Notice that "fe" is a hexadecimal bitmask of CPU cores; the lowest core is the rightmost bit:
    CPU cores
ff  1111 1111  (cores 0-7)
fe  1111 1110  (cores 1-7, core 0 left out for the IRQ)
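Putting the pieces together for this box (rx IRQ 54 and interface eth1, taken from the /proc/interrupts output above):
irq=54      # eth1-rx-0 in /proc/interrupts
iface=eth1
echo 1 > /proc/irq/${irq}/smp_affinity                     # pin the rx IRQ on CPU0
echo "fe" > /sys/class/net/${iface}/queues/rx-0/rps_cpus   # steer packets to CPU1-7
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 4096 > /sys/class/net/${iface}/queues/rx-0/rps_flow_cnt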

Confirmation

Without stopping Suricata we now see with top -H (press > a few times to sort on COMMAND) that all the receive threads are busy:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND      
28769 root      20   0 2792m 1.9g 822m S    0 24.4   0:00.51 Suricata-Main
28770 root      20   0 2792m 1.9g 822m S    1 24.4   1:58.39 AFPacketeth11
28771 root      20   0 2792m 1.9g 822m S   11 24.4   0:01.42 AFPacketeth12
28772 root      20   0 2792m 1.9g 822m S    9 24.4   0:01.31 AFPacketeth13
28773 root      20   0 2792m 1.9g 822m S   14 24.4   0:01.48 AFPacketeth14
28774 root      20   0 2792m 1.9g 822m S   10 24.4   0:01.26 AFPacketeth15
28775 root      20   0 2792m 1.9g 822m S   10 24.4   0:01.32 AFPacketeth16
28776 root      20   0 2792m 1.9g 822m S   10 24.4   0:01.35 AFPacketeth17
28777 root      20   0 2792m 1.9g 822m S   10 24.4   0:01.34 AFPacketeth18
And a second confirmation from the stats.log file:
capture.kernel_packets    | AFPacketeth11             | 73211942
capture.kernel_packets    | AFPacketeth12             | 5193446
capture.kernel_packets    | AFPacketeth13             | 5176265
capture.kernel_packets    | AFPacketeth14             | 4997172
capture.kernel_packets    | AFPacketeth15             | 5059325
capture.kernel_packets    | AFPacketeth16             | 5727925
capture.kernel_packets    | AFPacketeth17             | 5499172
capture.kernel_packets    | AFPacketeth18             | 4503364

Do note that your /proc/interrupts will stay the same, unbalanced. This is because one CPU core still handles the hardware interrupts and then distributes the packets to the other cores via the software steering queues.

Of course, don't forget to fine-tune the hardware buffers and offloading as explained in Eric's blog post here.

Now the IRQ problem is gone and we see (almost) no drops anymore (the numbers are no longer rising).
capture.kernel_drops      | AFPacketeth11             | 250
capture.kernel_drops      | AFPacketeth12             | 322
capture.kernel_drops      | AFPacketeth13             | 37
capture.kernel_drops      | AFPacketeth14             | 147
capture.kernel_drops      | AFPacketeth15             | 184
capture.kernel_drops      | AFPacketeth16             | 130
capture.kernel_drops      | AFPacketeth17             | 358
capture.kernel_drops      | AFPacketeth18             | 91



Thursday, November 14, 2013

Suricata monitoring with Zabbix or other

When your Suricata IDS system runs you might want to monitor it for various reasons. One of them is to get alerts when things go wrong; another could be that you want to measure the impact of configuration changes.

Sample graphic of kernel_drops and kernel_packets

By default Suricata has a configuration option to activate a stats.log file. This file is great as it dumps very detailed numbers on memory use, drops, etc. The downside is that the file can be difficult to parse: the number of lines depends on the number of threads you configured Suricata with, and each thread logs its counters separately. We need some magic here to transform the file into something usable.
My ultimate goal is to send the statistics to my Zabbix monitoring server.

So let's first activate the stats file by editing /etc/suricata/suricata.yaml and setting:
  - stats:
      enabled: yes
      filename: stats.log
      append: no
      interval: 60

As briefly explained above, the problem with the stats file is that it outputs the same counter once per thread.

$ tail -n 500 /var/log/suricata/stats.log | fgrep kernel_packets
capture.kernel_packets    | AFPacketeth01             | 94395
capture.kernel_packets    | AFPacketeth02             | 34208336
capture.kernel_packets    | AFPacketeth03             | 38117599
capture.kernel_packets    | AFPacketeth04             | 35171352
capture.kernel_packets    | AFPacketeth05             | 36019808
capture.kernel_packets    | AFPacketeth06             | 41114606
capture.kernel_packets    | AFPacketeth07             | 34827247
capture.kernel_packets    | AFPacketeth08             | 47159880

For a system configured with 8 threads, each interval adds 367 new lines to the stats.log file.
I wrote a script to consolidate all counters into one single output. You can find the script here.
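The core idea is simple: sum each counter over all threads, counting only the most recent dump. A minimal sketch of that idea (not the actual script, and assuming each dump in stats.log is preceded by a dashed separator line, as in the default layout):

totals = {}
for line in open('/var/log/suricata/stats.log'):
    if line.startswith('----'):          # a new dump starts: reset
        totals = {}
        continue
    parts = [p.strip() for p in line.split('|')]
    if len(parts) == 3 and parts[2].isdigit():
        counter, thread, value = parts
        totals[counter] = totals.get(counter, 0) + int(value)
for counter in sorted(totals):
    print "- suricata[%s] %s" % (counter, totals[counter])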

usage: suricata.py [-h] [-z] [-q] [-v]

Consolidate the suricata stats file.

optional arguments:
  -h, --help     show this help message and exit
  -z, --zabbix   Send output to zabbix
  -q, --quiet    Be quiet (do not print to stdout)
  -v, --verbose  be more verbose 

The output is something like:
# /etc/zabbix/suricata.py 
- suricata[decoder.udp] 33189293
- suricata[decoder.avg_pkt_size] 5523
- suricata[decoder.ipv6_in_ipv6] 0
- suricata[tcp.segment_memcap_drop] 6266714
- suricata[flow.emerg_mode_over] 0
- suricata[defrag.ipv4.timeouts] 0
(snip)


Now the next step is to feed this to your graphing tool, or Zabbix.
For Zabbix you first need to create a template with the counters/items, triggers and graphs. You can find this XML file here (it doesn't contain all the counters yet). Go to Configuration > Templates, click Import and import the XML.
Then go to Configuration > Hosts > (select host) > Templates, add the "Template IDS" template and Save.

Now, on the IDS system, add the following line to /etc/crontab to execute the script every minute and send the output to Zabbix:

*/1 * * * * root   /etc/zabbix/suricata.py -q -z

Wait some time and go to Monitoring > Latest data to plot graphs of the data coming in.

Graph of segment_memcap_drops over time


Some counters




Tuesday, September 10, 2013

RTIR automatic constituency by email sender

RTIR is an open source incident handling system targeted at computer security teams.
The tool allows you to structure your tickets and tasks in a more advanced flow than a "standard ticket": there are Incident Reports, Incidents and Investigations.
Each ticket also has some additional metadata assigned, such as the constituency.

For various reasons you might want to give your constituents access to your ticket tracking system. Each user will then be able to see all tickets that they either reported or own. However, if you have two users who are in the same constituency, let's say "FOO", you need to use the add_constituency script as explained in these instructions.

It is also possible to pre-set the constituency by email as explained here; the problem is that this doesn't work if you don't run the mail server on the ticketing system itself and instead fetch your mail with a tool like fetchmail.

Fortunately RTIR is powerful enough to be extended with scrips that automate certain actions. To do this, follow these instructions:
  • As admin go to Tools > Configuration > Queues > select. 
  • Then select “Incident Reports” and go to the “Scrips” tab. 
  • Create or Edit the Scrip called “AutoConstituency”:
    • Condition: On Create
    • Action: User Defined
    • Template: Global template: Blank
    • Stage: TransactionCreate
  • In the custom action preparation code set:
    • return 1;
  • In the custom action cleanup code:
# Map sender domains to constituencies.
# The patterns are checked in order, from most specific to least
# specific, so the first hit wins ('.*' is the catch-all).
# An ordered list is used instead of a hash, because Perl hashes
# do not preserve insertion order.
my @domain_map = (
    [ '\@google\.com' => 'GOOGLE' ],
    [ '\@fosdem\.org' => 'FOSDEM' ],
    [ '\@brucon\.org' => 'BRUCON' ],
    [ '.*'            => 'Other'  ],
);
# Check each of our defined patterns for a match,
# stop at the first hit
foreach my $entry (@domain_map) {
    my ($pattern, $constituency) = @$entry;
    if ($self->TicketObj->RequestorAddresses =~ /${pattern}/) {
        # Domain matches - set the right Constituency
        my ($status, $msg) = $self->TicketObj->AddCustomFieldValue(
            Field => 'Constituency',
            Value => $constituency,
        );
        RT->Logger->warning( "##### Couldn't set CF: $msg" ) unless $status;
        return 0;
    }
}

  • Save the scrip.
This will do the magic.


Sunday, April 21, 2013

Resolving DNS requests for malware analysis

INetSim is an interesting tool for simulating common internet services. It's worth gold when you want to run an air-gapped network and still simulate "the internet" so that malicious software continues to work as it should. While it is active you monitor its behavior on your victim machine and on the INetSim server.

One thing that frustrated me was the default behavior of the DNS service within INetSim. When a client asks INetSim to resolve a DNS name, the service always responds with the same fixed IP address.

This is rather annoying when analyzing malware that uses multiple DNS names to connect to multiple command and control servers, or just performs test connections. As the DNS service replies with the same IP, and the malware establishes a TCP connection to that IP, you can't make the relation between the domain name and the communication. There is no clear way to know which TCP session and which communication matches which command and control server.

Except if you hardcode the different domain names in the configuration file, of course. But how do you put a name in that configuration if you don't know it yet? Basic static analysis could already have given you a name, but that is unlikely if the malware was packed with a non-standard packer. So should I first spend loads of time manually unpacking the malware? Or should I run the malware, look at the DNS requests, put these DNS names in my INetSim config, restore from snapshot, re-infect the machine, see new domain names, add those too, etc.?

Being a lazy person, this didn't motivate me a lot, so when I was following Lenny Zeltser's SANS 610 class some time ago I threw this question at him. Fortunately I was not the first one with this frustration: another student of his had written a Python script to do incremental DNS responses and gave me a copy. However, I didn't like the idea of using yet another additional tool, so I looked into the code of INetSim and a hack looked easier than expected.

So I wrote a simple patch that adds this new functionality:
- for each new DNS name, a new IP is returned (i++)
- requesting the same DNS name twice returns the same IP, of course (I save it in the temporary hash together with the hardcoded hostnames)
- the start IP is the default IP
- the functionality is activated by a configuration flag.

There is however a limitation: once the x.y.z.254 IP is reached, the DNS responses will stay at that same IP.
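The idea, sketched in Python for readability (INetSim itself is Perl, and this is not the actual patch code):

known = {}                 # starts with the hostnames hardcoded in inetsim.conf
next_ip = [10, 10, 10, 1]  # the configured default IP (example value)

def resolve(name):
    if name not in known:                          # new name: hand out a new IP
        known[name] = '%d.%d.%d.%d' % tuple(next_ip)
        if next_ip[3] < 254:                       # the .254 limitation
            next_ip[3] += 1                        # i++ for the next name
    return known[name]                             # same name twice: same IP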

The patch was sent to the INetSim developers, who were going to look into integrating it when they had a bit more time. It seems I had forgotten to publish this 5-month-old code here.

You can apply the patch using the following commands:
tar xzf inetsim-1.2.3.tar.gz
wget http://documentation.vandeplas.com/inetsim/inetsim_incrementaldns.patch
cd inetsim-1.2.3/
patch -p1 < ../inetsim_incrementaldns.patch
This will output: (the fuzz is because the patch was for INetSim v1.2.2)
patching file conf/inetsim.conf
patching file lib/INetSim/Config.pm
patching file lib/INetSim/DNS.pm
Hunk #1 succeeded at 67 with fuzz 2.
Now install INetSim, start it up and perform some DNS queries. We see the responses increment each time, while staying consistent when requesting the same name.
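For example (hypothetical domain names, and assuming the default IP 10.10.10.1):
$ dig +short evil-c2-one.example @10.10.10.1
10.10.10.1
$ dig +short evil-c2-two.example @10.10.10.1
10.10.10.2
$ dig +short evil-c2-one.example @10.10.10.1
10.10.10.1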




Sunday, March 10, 2013

MISP - Malware Information Sharing Platform

It took some time, but we were finally able to release MISP as open source software.
MISP, the Malware Information Sharing Platform, has been developed in collaboration between the Belgian Defence CERT and the NATO Computer Incident Response Capability (NATO NCIRC), and is today actively developed and used in production.

The problem we experienced in the past was the difficulty of exchanging information about (targeted) malware and attacks within a group of trusted partners or a bilateral agreement.
Even today much of the information exchange happens in unstructured reports from which you have to copy-paste the information into your own text files, which you then have to parse to export to (N)IDS and systems like log searches, etc.

To facilitate the exchange of technical information we started to develop this tool, which:
- automates the exchange of IOCs
- makes your internal IOC database accessible (including uploaded malware samples, reports, ...)
- correlates different malware samples and events
- generates files in various export formats (snort/IDS, plain text, XML, ...) (in the future MAEC and other IOC formats)
- synchronizes with instances of external trust groups

This results in faster detection of targeted attacks and improves the detection ratio while reducing false positives. We also avoid reversing similar malware, as we know very quickly that others have already worked on it.
The Red October malware, for example, gives a view similar to this:

(...)
Feel free to have a look at the (pdf) documentation in the INSTALL directory.
For the future version (v2) this is the develop branch: https://github.com/MISP/MISP/tree/develop/INSTALL
We are actively developing this tool and many (code, documentation, export formats,...) improvements are coming.
Feel free to fork the code, play with it, make some patches and send us the pull requests.
Feel free to contact me if you have questions or remarks.

The project site is: https://github.com/MISP/MISP
There are 2 branches:
- develop: future v2 with many many improvements
- main: current stable version, but it has some bugs in the synchronization functionality (we're fixing these)

Some people might think of CIF, the Collective Intelligence Framework; however, the two tools are different. Perhaps integration between the two will be provided in the future.

Sunday, November 6, 2011

Migration from Drupal to Blogger

(update2: added link to Drupal 7 version by Nico Schlömer)
(update: Migrated the code to GitHub and implemented minor improvements.)


It has finally happened: this blog has migrated from Drupal to Blogger. My reason to move towards Blogger (and thus not away from Drupal) is very simple: no need to patch/update the application.
An important thing for me was that I wanted to keep all my blog posts, timestamps and comments. Unfortunately it looks like most people move away from Blogger towards Drupal, and the web is full of code and information to export your data from Blogger as XML and then import it into Drupal.
But information on how to upload everything into Blogger was practically nonexistent.
So I wrote a php script to do the export while keeping:
  • posts
  • comments
  • tags / categories 
  • publishing date
However, there are a few quirks.
  • It seems to work only for Drupal 6, not 7.
  • Comments are (partially) anonymized because of a security feature of Blogger
  • URLs are not customizable, so you will create dead links
  • Images are not changed or imported. So manual work is still necessary
To use this script, first create your blog in Blogger, create a test post and export it to XML. Then run my PHP script and copy-paste the output towards the bottom of the XML, where your test post is located.
Save the file and import it again into Blogger. It usually takes some time, but in the end you get the message that everything was imported correctly.

The code to do this is located here: https://github.com/cvandeplas/inet_scripts/blob/master/drupal_to_blogger.php .
A version for Drupal 7, written by Nico Schlömer, is located here: https://github.com/nschloe/drupal2blogger

Saturday, October 22, 2011

Book review: BackTrack 5 Wireless Penetration Testing

Just before my holiday I got a new mail from Packt Publishing asking me to read a new book of theirs about wireless penetration testing. Perfect to read on a sunny beach.

As this book is aimed at beginners, I tried to read and review it with beginner's eyes. As with their other book, I was positively surprised to see a name I knew. The author, Vivek Ramachandran, not only gave a wireless pentesting training at BruCON, but is also known for his work on wireless security.

Content
The book has nine chapters, going from how to build your lab and what kind of hardware is required, to more advanced attacks like mis-association, Caffe Latte, and breaking WPA-Enterprise.

I wouldn't compare this to a standard book you read cover to cover; it is more a training manual that teaches you some (basic) theory and then gives you lab exercises (or vice versa). This is a great thing for geeks like me who remember by doing, not by reading.

The disappointing bit was the lack of cryptographic theory. I think it is important to not only learn to use a tool with its command line options, but also to know the differences between PTW and FMS attacks, and why it's possible to do ARP replays while the packets are encrypted. (Answer: because an ARP packet has a fixed length, it can be recognized even when encrypted.)

As I am more experienced, half of the book was a quick read; the second half was a lot more pleasing as it taught me things I didn't know (or had forgotten through lack of practice).

Conclusion
If you don't have experience with wireless cracking/penetration testing this book is definitely a must-read. I do advise, however, that you open Wikipedia and the Aircrack site when reading through WLAN Encryption Flaws (Chapter 4) to better understand the cryptography.
Don't forget to buy a wireless card supporting monitor mode and packet injection while ordering this (e)book.

If you want to read a bit have a look at the free sample chapter.

RTBF TV Series downloader

Some time ago I wrote a simple script to automagically download TV episodes via the "revoir" functionality of the RTBF website.
That first script was rather unstable, so I analyzed the HTTP flow that occurs while playing a video manually and wrote a much more stable script that has been working for some time now.
The rtbf_tv_series_downloader.py script is available in a GitHub repository.

How does it work?
  1. The XML feed with the latest episodes is fetched.
  2. The unique id is extracted from that file.
  3. That unique id is used to download the JSON file for that episode.
  4. In that JSON file a full download URL is available.
  5. That file is downloaded and saved to disk, but only if it is not already there.
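In simplified code the five steps look roughly like this; the URLs and field names below are placeholders, the real ones live in the script:

import json, os, urllib2
from xml.etree import ElementTree

FEED_URL = 'http://www.rtbf.be/path/to/feed.xml'      # placeholder
META_URL = 'http://www.rtbf.be/path/to/%s.json'       # placeholder

feed = ElementTree.parse(urllib2.urlopen(FEED_URL))   # 1. fetch the XML feed
for episode in feed.iter('episode'):                  # (placeholder tag name)
    uid = episode.findtext('id')                      # 2. extract the unique id
    meta = json.load(urllib2.urlopen(META_URL % uid)) # 3. fetch the episode JSON
    url = meta['downloadUrl']                         # 4. full download url (placeholder key)
    target = os.path.basename(url)
    if not os.path.exists(target):                    # 5. skip files already on disk
        open(target, 'wb').write(urllib2.urlopen(url).read())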

Monday, June 27, 2011

Python global variables

Some things in Python are weird, especially when considering global variables. Take the following code, where we define three global variables (two strings and a dict) and change their values inside a function.
dictionaryVar = {'A':"original"}
stringVar = "original"
globalStringVar = "original"

def aFunction():
    global globalStringVar
    dictionaryVar['A']="changed"
    stringVar = "changed"
    globalStringVar="changed"
    return dictionaryVar, stringVar, globalStringVar

print "Output of the function is:"
a = aFunction()
print "Dictionary   : ",
print a[0]
print "String       : "+a[1]
print "Global String: "+a[2]
print "\nGlobal variables are now: "
print "Dictionary   : ",
print dictionaryVar
print "String       : "+stringVar
print "Global String: "+globalStringVar
Running this code (Python 2.6.6) gives the following output:
$ python tmp/foo.py 
Output of the function is:
Dictionary   :  {'A': 'changed'}
String       : changed
Global String: changed

Global variables are now: 
Dictionary   :  {'A': 'changed'}
String       : original
Global String: changed
So the conclusion is:
  • Global strings changed in a function are returned correctly, but not changed outside the scope of the function. (expected)
  • Global dictionaries changed in a function are returned correctly, and they are also changed outside the scope of the function. (not expected)
  • Global strings declared as global in the function and then changed are returned correctly, and are also changed outside the scope of the function. (expected)
The reason is that dictionaryVar['A']="changed" mutates the existing dictionary object in place, while stringVar = "changed" rebinds the name, and inside a function an assignment creates a new local variable unless the name is declared global.
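The difference is mutation versus rebinding, not dict versus string: rebinding the dictionary name instead of mutating the object leaves the global untouched too. A quick check:

dictionaryVar = {'A': "original"}

def rebindFunction():
    # assignment rebinds the name and creates a local, just like stringVar above
    dictionaryVar = {'A': "changed"}
    return dictionaryVar

print rebindFunction()   # {'A': 'changed'}
print dictionaryVar      # {'A': 'original'} - the global is untouched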

Tuesday, May 31, 2011

Book review: BackTrack 4: Assuring Security by Penetration Testing

Recently Packt Publishing contacted me to ask if I would like to review their BackTrack 4 book. Being an avid user of this distribution, and wondering what a book about BackTrack would look like, I accepted the offer.

A few days before BackTrack 5 came out, the book finally arrived in my mailbox. As I also had the opportunity to play with BackTrack 5 while reading the book, I should be able to see how useful it is now that BT5 is out.

A surprise
A first surprise came when I read the first pages about the authors and reviewers. Peter Van Eeckhoutte, also known as corelanc0d3r (from Corelan Team), is one of the three reviewers of this book. Seeing his name gave me a good feeling about what I was going to encounter. (No, no, it's not because he's Belgian.)

Content
The book is divided into twelve chapters. The first chapter is an introduction to the BackTrack distribution, its various forms, how to configure the basics, update the system and make your own version of the live CD. The second chapter (free sample) gives an overview of various penetration testing methodologies, including the OSSTMM, ISSAF, OWASP, ... but also a BackTrack pentesting process in ten consecutive steps: Target Scoping, Information Gathering, Target Discovery, Enumerating Target, Vulnerability Mapping, Social Engineering, Target Exploitation, Privilege Escalation, Maintaining Access, and last but not least Documentation and Reporting.

If you have used BackTrack before you will certainly recognize some of these names in the BT4 menus ... and even more in the BT5 menus ...

The next ten chapters first elaborate each step in some detail, then dive into the real usage of each of the tools delivered with BT: which options and arguments you need to do your job. This review won't go into detail on each chapter, as the book can be considered an "enumeration of many tools". Many of the tools I already knew, but many others I discovered while reading.

At the end there's the much-needed chapter about Documentation and Reporting ... a step often hated by techies. The book tries to convince you of the usefulness of your report and helps you with some tips and tricks and a sample table of contents to start from.

Downsides
Unfortunately no book is perfect, and the thing I really missed was a discussion of IPv6 tools and examples with IPv6 addresses. Fortunately there's still that rather old Uninformed article by H D Moore to fill the gap.

Also be careful not to read the whole book at once, as your brain risks a buffer overflow if you do.

Conclusion
As this book is really focused on the BackTrack distribution the authors knew they wouldn't need to fill pages on how to install these hundreds of tools, but instead they could concentrate on explaining what every tool does and how to use them.
Of course you can't expect an extremely deep dive into every one of the tools, knowing that the book discusses around 100 of them. But the authors found a good equilibrium by going deeper into the more important tools, with for example the five practical examples of exploitation with Metasploit (db_nmap, SNMP scanner, VNC scanner, IIS6 WebDAV attack, bind/reverse shell and meterpreter and msfpayload).

I already know what I'll do with this book: first put my name in it, then lend it to some friends who will certainly learn a lot from it, and finally make sure I get it back (that's why I put my name in it) to use as a quick reference later. An eBook version is available at a discount if you have the paper version, and I'm hesitating to buy that one for the sake of mobility.

So if you're interested in buying the book, you can do that here.