Gootloader infection cleaned up

Dear blog owner and visitors,

This blog had been infected to serve up Gootloader malware to Google search victims, via a common tactic known as SEO (Search Engine Optimization) poisoning. Your blog was serving up 200 malicious pages. Your blog served up malware to 0 visitors.

I tried my best to clean up the infection, but I would still do the following:

  • Upgrade WordPress to the latest version (one way the attackers might have gained access to your server)
  • Upgrade all WordPress themes to the latest versions (another way the attackers might have gained access to your server)
  • Upgrade all WordPress plugins (another way the attackers might have gained access to your server), and remove any unnecessary plugins.
  • Verify all users are valid (in case the attackers left a backup account, to get back in)
  • Change all passwords (for WordPress accounts, FTP, SSH, database, etc.) and keys. This is probably how the attackers got in, as they are known to brute-force weak passwords
  • Run antivirus scans on your server
  • Block these IPs (5.8.18.7 and 89.238.176.151), either in your firewall or in your .htaccess file, as these are the attackers' command and control servers, which send malicious commands for your blog to execute (see the firewall example after this list)
  • Check cronjobs (both server and WordPress), aka scheduled tasks. This is a common method that an attacker will use to get back in. If you are not sure what this is, Google it
  • Consider wiping the server completely, as you do not know how deep the infection is. If you decide not to, I recommend installing some security plugins for WordPress, to try and scan for any remaining malicious files. Integrity Checker, WordPress Core Integrity Checker, Sucuri Security, and Wordfence Security all do some level of detection, but none is 100% guaranteed
  • Go through the process for Google to recrawl your site, to remove the malicious links (to see what malicious pages there were, go to Google and search site:your_site.com agreement)
  • Check subdomains, to see if they were infected as well
  • Check file permissions
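
As an example of the firewall option mentioned above, the following iptables rules drop all traffic to and from the two command and control IPs (a sketch; adapt it to whatever firewall your server actually uses):

$ sudo iptables -A INPUT -s 5.8.18.7 -j DROP
$ sudo iptables -A OUTPUT -d 5.8.18.7 -j DROP
$ sudo iptables -A INPUT -s 89.238.176.151 -j DROP
$ sudo iptables -A OUTPUT -d 89.238.176.151 -j DROP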

Gootloader (previously Gootkit) malware has been around since 2014. It is used to initially infect a system, after which that access is sold off to other attackers, who usually deploy additional malware, including ransomware and banking trojans. Cleaning up your blog makes a dent in how they infect victims. PLEASE try to keep it up-to-date and secure, so this does not happen again.

Sincerely,

The Internet Janitor

Below are some links to research/further explanation on Gootloader:

https://news.sophos.com/en-us/2021/03/01/gootloader-expands-its-payload-delivery-options/

https://news.sophos.com/en-us/2021/08/12/gootloaders-mothership-controls-malicious-content/

https://www.richinfante.com/2020/04/12/reverse-engineering-dolly-wordpress-malware

https://blog.sucuri.net/2018/12/clever-seo-spam-injection.html

Belgian CHI Papers

It’s that time of the year again. This week, the annual CHI conference is taking place in Austin, Texas. While I’m not attending, I do try to follow the program somewhat through Twitter.

CHI is generally considered to be the most prestigious conference on Human-Computer Interaction, with acceptance rates between 20 and 25% (CHI currently has an overall acceptance rate of 23%: 2,930 papers were accepted out of 12,583 submissions). HCI research labs and individual researchers are often compared based on their track record for CHI (or the lack thereof). For example, Professor Jan Borchers of RWTH Aachen maintains a ranking of German universities based on the number of CHI archival publications (full papers or notes) since 2003.

Unfortunately, participation of Belgian universities and companies at CHI tends to be rather limited, especially with respect to these archival publications. Our lab, for example, got work-in-progress papers accepted before (e.g., Telebuddies), co-organized CHI workshops (e.g., User interface description languages for next generation user interfaces), and had people in the organizing committee, but we did not (yet) get a full paper or note accepted to CHI. I must say we don’t always try every year, though. On the other hand, we do publish at more specialized conferences, such as UIST (D-Macs), Pervasive (Situated Glyphs), 3DUI (Vanacken et al.), Tabletop/ITS (FluidPaint [video]), PERCOM (Pervasive Maps), INTERACT (Haesen et al.), MobileHCI (Luyten et al.), AVI (Gummy), and EICS (CAP3). Some of these more specialized conferences are considered to be as competitive and prestigious as CHI. This is especially the case for UIST, but also for CSCW, DIS and Ubicomp & Pervasive (for the related area of ubiquitous computing). Since other scientific disciplines (e.g., physics, biology) are mostly focused on journals instead of conferences, some people explicitly mention the importance of top HCI conferences such as CHI and UIST in their resume.

What about other Belgian universities? In 2009, Dr. David Geerts from CUO (KULeuven) had a full paper accepted to CHI. This was the first CHI archival publication from a Belgian institution in years. At that same edition of CHI, two researchers from Eindhoven University of Technology (TU/e) presented a scientometric analysis of the CHI proceedings up until 2008. Their analysis seems to indicate that the paper by Geerts was just the second Belgian archival paper at CHI. Indeed, Belgium has exactly 1 credit in the main proceedings up until 2008.

As far as I can tell, this refers to the paper at INTERCHI ‘93 by Jean Vanderdonckt and François Bodart of the Université Catholique de Louvain. Note that INTERCHI ‘93 was in fact a joint INTERACT+CHI conference (it was also the first CHI conference that was held outside North America).

Belgium’s neighbouring countries do a lot better in the analysis: the Netherlands have 17.17 credits, France has 27.03 credits and Germany is a clear winner with a score of 39.74 in the main proceedings. Belgium’s total number of credits per million inhabitants (which includes credits for extended abstracts — non-archival publications) is a bit higher than that of France, though (1.78 vs. 1.34).

Fortunately, the situation seems to be improving. Last year, KULeuven had another 2 archival papers accepted to CHI 2011: a note by Geerts, and a full paper by Karl Gyllstrom. This year, there is a note co-authored by Anand Ramamoorthy from the University of Ghent. Steven Houben, an UHasselt alumnus (and one of my former Master’s thesis students) who is now working on a PhD in Jakob Bardram’s group, got a CHI 2012 full paper accepted too (congrats again, Steven!). Of course, there’s the question of what really constitutes a Belgian CHI paper. Is it enough if the paper is (co-)authored by researchers employed by a Belgian institution, or do the authors have to be Belgian? While Karl Gyllstrom and Anand Ramamoorthy are affiliated with Belgian universities, they are not Belgian citizens (as far as I can tell). On the other hand, while Steven is a Belgian citizen, he is not affiliated with a Belgian university or company.

This made me wonder if there were any other Belgians working abroad who ever co-authored papers at CHI. I could only think of Professor Pattie Maes (VUB alumna) who directs the Fluid interfaces group at MIT Media Lab (she currently has 4 CHI papers according to DBLP). I would love to hear about other people that I might have missed.

To conclude, there is certainly room for improvement, although we’re not doing that badly either. Let’s hope the HCI community in Belgium continues to grow and that Belgium will eventually be as well represented at top HCI venues as our neighbouring countries.

Reflecting on Participating in the Vlaamse Programmeerwedstrijd

Last Wednesday, I participated in the fourth edition of the Vlaamse Programmeerwedstrijd (VPW), hosted by PHL in Hasselt. The VPW is a Flemish programming contest inspired by the ACM International Collegiate Programming Contest (ICPC). Participants compete in teams of 3 and are divided into three different categories: high school students (category I), undergraduate students (category II), and graduate students or professionals (category III). Teams get 3 hours to solve 5 problems in one of 12 supported programming languages, using a single computer.

My team competed in category III and consisted of myself and two colleagues. Our main motivation was fun, and curiosity about how well (or how badly) we would do. Also, since I was encouraging students to participate, it wouldn’t hurt to set an example by participating myself too.

Or so I thought. Our team eventually did not do well, as we only managed to correctly solve 1 out of 5 exercises. There was a big gap between the top 5 and the other teams in our category. Only the teams ranking 1st and 2nd correctly solved all problems. The 3rd team solved 4 problems, the 4th and 5th ranked teams solved 3 problems, and from then on teams only solved 2 problems or fewer (four teams didn’t solve any problems). All except 1 team in the top 5 had already participated in this contest before. This year’s winner actually ranked 21st last year in the Northwestern European Regional Programming Contest (NWERC), which is a European contest that serves as a qualifier for the ACM ICPC World Finals. By the way, kudos to the two teams of Master’s students from our university who managed to rank 7th and 9th (and beat us too).

Nevertheless, I had a lot of fun participating. I think I have a better idea now of what matters in this kind of contest and how we could do better in the future. Here is a quick list of factors that I feel are important:

Speed

You have to work really fast. Teams get 3 hours for 5 problems, which comes down to 36 minutes per problem. That includes reading the problem description, thinking about the solution, actually coding it up and verifying that it works!

You can bring notes, books and existing code along (although, obviously, you are not allowed to use the web), so it makes sense to build your own library of useful algorithms and data structures which you can use, either written down on paper or available in code. This includes things like graphs, trees, DFS, BFS, backtracking, the Sieve of Eratosthenes, Dijkstra and Floyd-Warshall. For example, we skipped one of the problems since we didn’t have a well-tested data structure available for it. Of course, you should also understand these algorithms and data structures and know when to use them.
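
To give an idea of what such a library might contain, here is a minimal sketch in Python of two of the staples mentioned above (hypothetical code for illustration, not taken from our actual notes):

from collections import deque

def sieve(n):
    # Sieve of Eratosthenes: return a list of all primes <= n (n >= 2)
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

def bfs(graph, start):
    # Breadth-first search; graph maps each node to a list of neighbours.
    # Returns the distance (in edges) from start to every reachable node.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist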

Since speed is important, this means you have to be proficient in the language you’re using. You don’t have time to look up how a certain feature works, so make sure you know how to read input (obviously), how to work with built-in data structures (e.g. lists, strings, dictionaries, stacks, tuples) and how to perform common operations (e.g. copying containers, mathematical operations on numbers, replacing characters in strings, regex matching). We used Python, which has a few nice features such as built-in eval (useful for evaluating strings of mathematical expressions with the right operator precedence), easy I/O, list comprehensions, built-in combinations and permutations functions, while also being easy to learn. I heard that most of the top-scoring teams were using Haskell.
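
A quick illustration of some of those Python niceties (toy examples of my own, not contest code):

from itertools import combinations, permutations

# eval() respects operator precedence, handy for expression-evaluation problems
print(eval("2 + 3 * 4"))                          # 14

# built-in combinations and permutations
print(list(combinations([1, 2, 3], 2)))           # [(1, 2), (1, 3), (2, 3)]
print([''.join(p) for p in permutations("ab")])   # ['ab', 'ba']

# reading whitespace-separated integers with a list comprehension
numbers = [int(x) for x in "3 1 4 1 5".split()]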

Know about Time Complexity

This is really essential. You have to be able to quickly assess the time complexity of your algorithm and decide whether it will be good enough. Most problem descriptions include upper bounds on their inputs, which you should take into account (even if the example input is very small). This helps you to discard inefficient algorithms, and gives you more time to work out a better one. Most programs will have to perform well on large inputs, which means you will usually have to do better than O(N^2).
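
As a made-up example (not one of the contest problems): checking whether two numbers in a list sum to a given target is trivially O(N^2), but sorting first brings it down to O(N log N), which makes all the difference on large inputs:

def has_pair_quadratic(values, target):
    # O(N^2): fine for N in the hundreds, hopeless for N around 10^5
    n = len(values)
    return any(values[i] + values[j] == target
               for i in range(n) for j in range(i + 1, n))

def has_pair_sorted(values, target):
    # O(N log N): sort once, then walk inwards from both ends
    values = sorted(values)
    lo, hi = 0, len(values) - 1
    while lo < hi:
        pair_sum = values[lo] + values[hi]
        if pair_sum == target:
            return True
        if pair_sum < target:
            lo += 1
        else:
            hi -= 1
    return False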

We made this mistake twice, because we were stressing over the amount of time left and wanted to quickly code up a working solution. While you can optimize your code, you don’t have time to change your algorithm when you have just finished a working (but slow) solution. With about 25 minutes left, we had a working version for one of the problems, but we didn’t succeed in optimizing it enough to stay within the time limit on the contest input. So eventually, we didn’t get any points for our solution. What we should have done instead was think about the time complexity of our algorithm first, and discard it as soon as we noticed that it didn’t scale.

Moreover, C, C++ (and Java) have an advantage over scripting languages in terms of performance: a suboptimal algorithm in C/C++ (e.g. O(N^2) instead of O(N log N)) might get very close to the time limit but could still be fast enough. The same algorithm implemented in a scripting language will almost always be too slow. If you’re programming in Python, just try running your code with CPython and PyPy to see what I mean. Here’s an example of running naive matrix multiplication (which takes cubic time) on two randomized 500×500 matrices in CPython versus PyPy:

$ time python n3matrix.py 500
real	2m30.598s
user	2m30.189s
sys	0m0.068s
$ time pypy n3matrix.py 500
real	0m13.244s
user	0m13.137s
sys	0m0.064s

The PyPy version runs in 13 seconds and would probably not exceed the contest time limit, while the CPython version takes a whopping 150 seconds! Even though there are great libraries for matrix operations in Python (hello NumPy!), this still matters since most programming contests don’t allow the use of external libraries.
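
For the curious, a script like n3matrix.py could look something like this (a sketch; the exact version used for the timings above is not shown here):

import random
import sys

def matmul(a, b, n):
    # naive O(n^3) matrix multiplication
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

n = int(sys.argv[1])
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]
matmul(a, b, n)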

Here’s a similar example of sorting a list of 10,000 elements using bubble sort (a quadratic algorithm):

$ time python n2sort.py 10000
real	0m38.837s
user	0m38.730s
sys	0m0.016s
$ time pypy n2sort.py 10000
real	0m2.284s
user	0m2.192s
sys	0m0.036s
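
Again, a sketch of what n2sort.py might look like:

import random
import sys

def bubble_sort(values):
    # textbook O(n^2) bubble sort
    n = len(values)
    for i in range(n):
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]

n = int(sys.argv[1])
bubble_sort([random.random() for _ in range(n)])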

Of course, one could also argue that scripting languages allow you to implement and debug a solution more quickly than C/C++, but I guess it all depends on how fluent you are in your language of choice.

Teamwork

Although you work in teams of 3 people, you can only use one computer, which means only one person at a time can do the actual coding. You will have to divide tasks. For example, one team member could try to get a head start on the next problems, or could sort problems in order of difficulty to select the next problem to work on collectively. I think this actually worked quite well in our team.

In our case, it helped to have the fastest programmer/typist on the keyboard most of the time, although we did switch several times when someone had an idea they wanted to try. If you switch a lot, make sure you standardize somewhat on the setup you use, or at least allow easy switching between tools. We made sure we had different editors on the laptop set up to work in a way that the other team members were comfortable with. Because we used Python, we also had to standardize on the number of spaces used for indentation.

Practice and Study

Finally, the best way to get better is by practicing (exemplified by the number of top-scoring teams who also participated in previous editions of the contest). You can practice on problem sets from previous years or on online problem sets (e.g. Project Euler, UVA, or SPOJ). Once you get better, it will probably also help to practice problems in contest conditions: as a team, with a time limit and just a single computer.

However, just practicing more probably won’t cut it. You are also going to need a good reference book on algorithms to look up problems, and suitable algorithms or heuristics for these problems. Since you won’t have enough time to read through books during the competition, you need to have this knowledge in your head (i.e. by putting the theory from the books into practice). I think you can only get better through a combination of theory and practice: studying different techniques and applying them in solutions to programming problems. Books on algorithms and data structures that I liked myself are: Skiena’s The Algorithm Design Manual, CLRS and Sedgewick’s Algorithms.

Of course, these are just my own observations, so comments are always welcome!

Getting up and running with the Kinect in Ubuntu 12.04

I’m currently experimenting a bit with the Kinect depth camera, which I intend to use for a prototype. I’ve played around with the device before on Windows using the Microsoft Kinect SDK. Setting up libfreenect on Windows is actually quite a pain, which was the reason why I started with the Microsoft SDK. As I wanted to code in Python, I’ve had a look at PyKinect and the Python Tools for Visual Studio (see also this PyCon US 2012 talk about building a Kinect game with Python). Unfortunately, the PyKinect samples seem to be outdated, and don’t work with the latest version of the Kinect SDK (version 1.0, released in February).

So I decided to see how far I would get with the Kinect and libfreenect on Ubuntu (12.04). Step 1: install the libfreenect demos to test if it works (this will also install libfreenect and its dependencies).

$ sudo apt-get install libfreenect-demos

So far so good. Unfortunately, the demo application wouldn’t start:

$ freenect-glview 
Kinect camera test
Number of devices found: 1
Could not claim interface on camera: -6
Could not open device

Strange... I did another check to see if my user was in the plugdev group (which was the case):

$ groups jo
jo : jo adm cdrom sudo dip plugdev lpadmin sambashare

Then I noticed that the Kinect’s LED kept blinking continuously, which usually means there’s some sort of connection problem. It was correctly recognized, though:

$ lsusb
...
Bus 002 Device 007: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 002 Device 008: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 002 Device 009: ID 045e:02ae Microsoft Corp. Xbox NUI Camera

After a quick web search for the specific libfreenect-glview error message, I learned that recent versions of the Linux kernel prevent libfreenect from claiming the Kinect as they now include a driver to support the device as a regular webcam (see kinect.c), which is actually quite cool. This also means there should be a specific video device for the Kinect (/dev/video1 in my case).

The kernel modules are indeed loaded:

$ lsmod | grep -i gspca
gspca_kinect           12936  0
gspca_main             28366  1 gspca_kinect
videodev               98259  2 gspca_main,uvcvideo 

Let’s try playing the camera stream using GStreamer and Video4Linux:

$ gst-launch-0.10 v4l2src device=/dev/video1 ! video/x-raw-yuv ! ffmpegcolorspace ! xvimagesink

That seems to work!

The Kinect actually has two image sensors: an RGB camera and a depth sensor, which consists of an infrared laser projector and a monochrome CMOS sensor. A depth map is created by projecting structured infrared light and capturing the resulting image from the monochrome sensor. As I wasn’t sure how to grab an image from the monochrome sensor, I contacted the author of the kernel module (Antonio Ospite). He told me it’s possible to get a monochrome image by specifying an image size of 640×488 pixels (instead of the usual 640×480). Note that the kernel module currently only supports video streams from the monochrome sensor as unprocessed output.
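
Presumably that means adding explicit width and height caps to the earlier pipeline, something like this (my reconstruction; I haven't verified the exact caps string):

$ gst-launch-0.10 v4l2src device=/dev/video1 ! video/x-raw-yuv,width=640,height=488 ! ffmpegcolorspace ! xvimagesink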

Passing that specific width and height to GStreamer indeed yields the monochrome image.

It’s also possible to get the full-size (1280×1024) monochrome image. However, most webcam apps will just show the video stream from the RGB sensor for that resolution as you can’t specify that you want the specific Y10B format. To do that, you can use a separate program like qv4l2, which can be installed as follows:

$ sudo apt-get install qv4l2

To get the full-size monochrome image, run qv4l2, open the /dev/video1 device in raw mode (File > Open raw device), select 1280×1024 for the frame size and “Y10B – Y10B” for the capture format, and click the red capture button.

OK, back to running the libfreenect demo. Someone over at superuser.com suggested removing the modules (so that the user-mode libfreenect driver could take over) and running libfreenect-glview again.

And indeed, after removing both gspca modules, the freenect-glview demo did work:

$ sudo modprobe -r gspca_kinect 
$ sudo modprobe -r gspca_main
$ freenect-glview 
Kinect camera test
Number of devices found: 1
GL thread
Write Reg 0x0006 <= 0x00
Write Reg 0x0012 <= 0x03
Write Reg 0x0013 <= 0x01
Write Reg 0x0014 <= 0x1e
Write Reg 0x0006 <= 0x02
Write Reg 0x0005 <= 0x00
Write Reg 0x000c <= 0x00
Write Reg 0x000d <= 0x01
Write Reg 0x000e <= 0x1e
Write Reg 0x0005 <= 0x01
...

The left side of the window shows a colored depth image, while the right side shows the RGB image from the camera. Of course, it would be quite cumbersome to remove the modules again every time you want to use libfreenect. One option is to blacklist the gspca module as follows:

$ echo "blacklist gspca_kinect" >> /etc/modprobe.d/blacklist.conf

Now that we've got that sorted out, I'll try out the libfreenect Python wrapper (or perhaps SimpleCV).
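
If the wrapper works as advertised, grabbing a depth frame should be as simple as something like this (a sketch based on the wrapper's synchronous API; I haven't actually run it yet):

import freenect

# grab a single frame from the depth sensor (blocking call);
# returns a NumPy array plus a timestamp
depth, timestamp = freenect.sync_get_depth()
print(depth.shape)  # expected: (480, 640)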

Update: Antonio Ospite pointed out in the comments that recent versions of libfreenect (0.1.2) can automatically detach the kernel driver. There's a PPA available from Florian Echtler with updated libfreenect packages for Ubuntu 12.04.

Lenovo ThinkPad X121e

I am writing this on my new laptop: a Lenovo ThinkPad X121e.

Although my employer is generous enough to provide me with a laptop for work purposes, I also wanted a new laptop at home for personal use. My only personal machine right now is my aging (and quite slow) 17 inch Apple Powerbook G4. Since most Apple software currently requires an Intel processor, I can’t really upgrade it either. I also wanted something a bit more portable (no heavy, bulky 15-incher) with better battery life, which would be great for travelling.

While my work laptop runs Windows 7, I really miss having a UNIX environment available at times. I worked almost exclusively on Linux from 2003 onwards, starting out with Mandrake, moving from that to Gentoo and then to Debian, to eventually settle on Ubuntu. A few years ago, I decided to move back to Windows due to bad hardware support on Linux (especially for newer laptops) and annoying bugs in Ubuntu. Nevertheless, I still prefer a UNIX environment for programming, and I love the power of the shell and how you can quickly chain commands together to produce something useful.

At first, I was thinking of just getting a new Apple laptop. Indeed, OS X is also based on UNIX but does have great hardware support and a nice UI. Furthermore, Apple builds high-quality hardware (I loved my Powerbook), and they sell the ultimate ultraportable laptop: the Macbook Air.

Unfortunately, I quickly discarded that idea when I realized again how expensive Apple laptops are. I didn’t really feel like shelling out about 1200 EUR just to get a Macbook Air with a decent amount of RAM and an SSD bigger than 64 GB. So I started looking around for cheaper alternatives. Instead of complaining about Ubuntu not supporting certain hardware, I decided to look for hardware that would Just Work (TM) on Ubuntu. There are also some things I didn’t like that much on OS X, such as the lack of a well-integrated package manager. At FOSDEM, I remember seeing most developers work on either Apple machines or IBM (now Lenovo) ThinkPads. Some people even used to say that Apple and IBM ThinkPad laptops are the only laptops really worth considering. ThinkPads also come in an ultraportable X series, which I thought was worth looking into.

While the professional ThinkPad X series laptops are again quite expensive, Lenovo also has entry-level models, such as the X100e, X120e or the X121e. These models come with either an AMD Fusion or an Intel Core i3 processor. The nice thing about the Intel model of the X121e is the fact that it’s quite well supported in Linux, and has received nice reviews. So, I decided to buy myself one.

In summary, here’s what I like about this laptop, compared to the Macbook Air:

  • Half the price. It costs around 630 EUR, and when purchasing a 53 EUR 2-year Next Business Day On Site Warranty, you get a 100 EUR cashback. That means you can get the laptop plus 2-year warranty for around 580 EUR, which is a great deal, in my opinion. The AMD version even starts from 400 EUR.
  • More ports (Ethernet, 3x USB, VGA, HDMI).
  • Better battery life (up to 9 hours according to Lenovo).
  • Customizability (it’s quite easy to replace the HDD, battery or add more RAM).
  • Built-in 3G (HSPA) WAN and GPS module (I really like this feature).
  • Comparable size and weight (11.6 inch and 1.3 kg, about 250 g heavier).
  • Comparable build quality (the laptop feels very sturdy).
  • Comparable keyboard quality (with TrackPoint).

Disadvantages are the slower processor (Core i3 versus the Macbook’s Core i5 or Core i7, but probably fine for my needs), the lower-quality display (same resolution, but the Macbook Air’s is better, although I do prefer matte screens), a fan that is slightly noisy at times, and the less sexy design (though I myself actually quite like the look and feel of the device).

Oh, and it has a nice sleep animation (probably inspired by Apple’s own pulsing LED sleep animation).

Let’s see how installing Ubuntu goes on this one.