Friday, July 24, 2015

Too many crashes

When I started focusing more on fuzzing this year, I never thought I would run into this kind of problem. But here I am now, complaining that my fuzzing produces too many crashes.
And I'm not even joking: for example, over the last ~24h I fuzzed one PDF reader with only ONE instance on ONE machine, and the result was a bit less than 300 crashes. In this kind of situation it is really impossible to do preliminary exploitability analysis manually (at least I don't have that kind of time), so the only option is to train the fuzzer to do some of the analysis automatically, so I can easily put aside the 98-99% of crashes that are not unique or don't have potential.

The filtering and sorting that my fuzzer does is most fully built out in the Windows environment, where I use the winappdbg library to get all the info I need. On Linux I wrote a wrapper for gdb, and on OS X I rely on its own crash reporter application and read the data out of its logs (I should put a lot more effort into the last two). So I will use Windows to describe my logic:

The sorting is built up as a directory tree:
  1. level: Close to NULL, not close to NULL, bit both*
  2. level: Type of the issue (write, read, read from IP, unknown, heap corruption, etc.)
  3. level: Location of the crash (labeled if possible, otherwise the last 2 bytes of the address in hex - because of ASLR)
  4. level: Last 2 bytes (because of ASLR), in hex, of the last 8-10 addresses from the stack trace (something like "34FC_322D_31FD_411A_3CC3_3CC3_3108_31DB")
  5. level: The crash files themselves, with an additional txt file that contains all the cool crash information
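As a rough illustration, the path for one crash can be built like this (a hypothetical Python sketch, not the fuzzer's actual code; the parameter names are made up):

```python
import os

def bucket_path(root, near_null, issue_type, location, stack_addrs):
    """Build the 5-level sorting path for one crash.

    near_null   -- "close_to_null", "not_close_to_null" or "bit_both"
    issue_type  -- "write", "read", "read_from_ip", "heap_corruption", ...
    location    -- label if available, else last 2 bytes of the crash
                   address in hex (the rest is useless because of ASLR)
    stack_addrs -- the last 8-10 addresses from the stack trace
    """
    # level 4: last 2 bytes of each stack address, joined with "_"
    stack_sig = "_".join("%04X" % (a & 0xFFFF) for a in stack_addrs)
    path = os.path.join(root, near_null, issue_type, location, stack_sig)
    os.makedirs(path, exist_ok=True)  # level 5: crash file + info txt go here
    return path
```

The crash file and its txt file with the crash information are then copied into the returned directory.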


After a lot of other ideas and tests, I chose this structure because it's both easy to look at quickly and, most of the time, enough to separate crashes that happen in the same place but have different original causes. For example, I can quickly look into the "Not close to NULL" directory and see what type of stuff is there. If after a day of fuzzing there is a "write" directory inside, that already makes me happy, because it gives hope for a heap overflow or another memory corruption type of issue. If I go into that directory, I can get a quick look at all the places that have caused an incorrect memory write. If I go another level deeper, I can see how many different stack paths were taken to any of them. And finally, of course, I have the files that caused the crash and the txt files with the technical crash information.

Writing the code to do this kind of filtering is much easier than I thought at the beginning - even on Linux I can pipe commands into GDB, pipe out the results, do some string analysis and get the data. On Windows with WinAppDbg or PyDbgEng it's trivial and can be achieved with a couple of hours of work (even by someone like me, for whom Python is not a second language).
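The Linux pipe-in/pipe-out approach can be sketched roughly like this (a sketch under my assumptions, not the actual wrapper; the exact GDB commands and the string analysis depend on the target):

```python
import re
import subprocess

def run_gdb(binary, testcase):
    # Batch mode pipes the commands in and the results out in one go
    cmd = ["gdb", "--batch",
           "-ex", "run",
           "-ex", "bt 10",            # top frames for the stack signature
           "-ex", "info registers",
           "--args", binary, testcase]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def parse_gdb_output(text):
    """String analysis on GDB's output: signal name and frame addresses."""
    m = re.search(r"Program received signal (\w+)", text)
    signal = m.group(1) if m else None
    frames = re.findall(r"^#\d+\s+(0x[0-9a-fA-F]+)", text, re.MULTILINE)
    return signal, frames
```

The parsed signal and frame addresses are then enough to drive the same directory sorting as on Windows.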


*You might ask: "How can a crash happen both near NULL and not near NULL?" Well, with some applications I ran into a situation where some crashes happened only when the sun was in exactly the right spot in the sky, and never when I later opened the files that the fuzzer reported. My solution was to automatically re-test every crash right after first detection. In some cases the crash happened during the re-tests too, but in a different location. And sometimes some of these locations were near NULL and others were not. That is the situation where the fuzzer gives them the status "Bit both".
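The verdict over the re-tests is simple bookkeeping; a minimal sketch (the 64KB near-NULL threshold is my assumption here, not the fuzzer's actual constant):

```python
NULL_ZONE = 0x10000  # crash addresses below this count as "close to NULL"

def null_status(crash_addresses):
    """Verdict over the first detection plus every automatic re-test."""
    near = {addr < NULL_ZONE for addr in crash_addresses}
    if near == {True}:
        return "close_to_null"
    if near == {False}:
        return "not_close_to_null"
    return "bit_both"  # some runs crashed near NULL, some did not
```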

Thursday, July 2, 2015

June

Have not written anything new for an entire month now, because almost the whole of June was filled with exams at college, and after that came the Estonian midsummer holidays. But still, there were some successes in June:
  • ZDI accepted one of my Adobe product findings: ZDI-CAN-3002
  • Found 2 smaller vulnerabilities in Skype (not fixed yet, so I will write about them in more depth later)
  • Improved my fuzzer's filtering and detection parts (still hope to make it public in August or September, but it has to be more easily usable before that)
  • In powerlifting training, the 110kg raw paused bench press felt relatively easy (at ~70kg bodyweight that's almost a good result)

Monday, May 25, 2015

CVE-2015-3200

Last week I noticed an issue in the lighttpd server source code that made it possible to do log injection. I notified the developers, and it was decided that because this issue does not result in RCE or DoS, but only affects the reliability of the logs, it is better to make it public. So here it is (still vulnerable, but now you know that the logs might be tampered with).

CVE: CVE-2015-3200
Software: Lighttpd
Type: Log injection
Bug track link: http://redmine.lighttpd.net/issues/2646
Source code Location: http_auth.c:860
Vulnerable servers: Servers that use basic authentication
Description: When the basic HTTP authentication base64 string does not contain a colon character (or contains it only after a NULL byte, which can be inserted inside the base64 encoding), the situation is logged as the string ": is missing in " followed by the decoded base64 string. This means that newlines, NULL bytes and everything else can be base64-encoded and end up in the logs as-is after decoding.

For example, the header "Authorization: Basic dGVzdAAKMjEwMC0wMS0wMSAwMDowMDowMDogKG1hZ2ljLmMuODU5KSBJVCdTIFRIRSBFTkQgT0YgVEhFIFdPUkxEIQ==" results in two log lines:
"
2015-05-14 12:55:54: (http_auth.c.859) : is missing in test
2100-01-01 00:00:00: (magic.c.859) IT'S THE END OF THE WORLD!
"

On another subject: does anyone know a place on xkcd.org that requires basic authentication?

Monday, May 18, 2015

Foxit fuzzing ended

Because rebuilding my fuzzing machine took more time than predicted, I will not fuzz Foxit any further, and the results I described last time are the final ones (I will do some additional analysis and then send them to the developers - it seems that 15 unique crashes/memory corruptions are the final result, after removing as many overlaps as I could with brief analysis). Also, because my fuzzing environment changed (I used 32-bit VMs and now use the 64-bit main machine directly), I can't continue calculating the code coverage either - the differences in the OS have created additional coverage paths, and this would not give the information I need for exact statistics.
With the next filetype I will do all the testing in the same environment, and then hopefully get better statistics up until the "end".

I will now continue using these 727 PDFs for testing other software, and hopefully it will be as successful as it was with Foxit - it was surprising to get this many crashes/memory corruptions with only simple fuzzing on one home machine.

Thursday, May 14, 2015

SITREP on Foxit fuzzing

Here are the results from the halfway point of the Foxit fuzzing (I have to take a couple of days' pause, because some electrical work is being done nearby and I don't want to keep stuff running):

Total time: Around 1.5 weeks
Total number of crashes: 1699
Total number of testcases: ??? (while I was away, the main machine was shut down because of the electrical work nearby, so I have the crash reports but not the total number of tests done)
Unique crash signatures: 23
Most probably not exploitable: 15 (NULL pointers and related stuff)
Might be exploitable: 2 (one that seems to be an arbitrary write, and one heap corruption)
No idea yet: 6 (some really strange crashes among them, but some seem to be endless-recursion type of stuff)


I also continued downloading new PDF files to test the predictions from the last post. I have downloaded about 60K by now and need around 100K more.

Saturday, May 2, 2015

Fuzzing prep

For the last 2 weeks I have done code coverage work on the 366K PDFs that I downloaded. As the base for code coverage I used Foxit PDF reader (it's a single exe file - much simpler to break apart in IDA and find all the basic blocks to use for monitoring - simple tracing is too slow), and as the tool I used my own scripts built with Python and WinAppDbg.
Explaining all of the work of finding the PDFs, writing (and optimizing - very important when doing stuff on one home machine!) the code coverage tool, and finding the smallest subset of files would make too long a post for me to write now - but I thought some of the statistics would be interesting.
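The smallest-subset step is essentially a greedy set-cover pass over the per-file coverage data; a minimal sketch (assuming each file's set of hit basic blocks has already been collected by the tracer):

```python
def minimise_corpus(coverage):
    """Greedy set cover: keep picking the file that adds the most
    not-yet-seen basic blocks, until no file adds anything new.

    coverage -- dict mapping filename -> set of basic block addresses hit
    """
    chosen, covered = [], set()
    remaining = dict(coverage)
    while remaining:
        best = max(remaining, key=lambda f: len(remaining[f] - covered))
        if not remaining[best] - covered:
            break  # every leftover file only repeats known blocks
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

Greedy selection is not guaranteed to find the absolute smallest subset, but it is fast and good enough for corpus distillation.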

BASE INFO
Base software: Foxit
Executable and dlls: Single exe file, size ~47MB
Basic code blocks found: 611927 (using IDA and my IDAPython script)
Files covered: 366027
Code blocks covered: 133661 (21.8%)
Final subset of files: 727 (0.2%)
Machines used: 5 VMs, each running a single instance (sadly this was the most stable setup when tested - I have to try some other approaches, because it's just a waste of resources)
Time cost: ~2 weeks
My own time spent (not counting earlier tool development time): maybe a couple of hours total. The tools did not crash or stop working even once (damn proud of that)


STATISTICS (taken during the process)



ANALYSIS
It's clear that I should have downloaded more files to get as good code coverage as possible with this method. The addition of new files to the resulting list did not stop even in the final batch - it was still 0.29 new files per 1000 input files covered. That means that for about every 3450 PDF files analyzed, I got one additional file for my final set. If the graph can be trusted, this trend should end somewhere between 400K and 500K files. I will test this once I have downloaded the additional files. But until then, I will start fuzzing the 727 files that I ended up with - let's see what happens.


FIRST FUZZING RESULTS (first 10 hours of fuzzing Foxit)
Altogether 41 crashes and 10 unique ones (based on my tool that sorts by type and relative EIP):
  • 1 unique writeAV - could be exploitable, but a quick glance did not strengthen that opinion
  • 6 unique readAVs - all of them close to 0, so probably not exploitable
  • 1 unique readAV where it tries to read from address 0xBAADF00D, so uninitialized allocated heap content (the fill pattern of the DEBUG version of HeapAlloc) was used as a pointer. Could be interesting
  • 2 unique crashes caused by unknown exceptions that were not caught by the handlers - did not have time to investigate further
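The "relative EIP" half of that uniqueness key can be sketched as follows (a hypothetical helper, assuming the faulting module's base address is known from the debugger):

```python
def crash_signature(crash_type, module_base, eip):
    """Issue type plus the EIP offset inside its module, so ASLR's
    random base address does not split one bug into many signatures."""
    return "%s_%08X" % (crash_type, eip - module_base)
```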

Tuesday, April 28, 2015

Stored XSS in ebay messages filenames

I have been quite an ethical hacker/pentester so far and have always disclosed everything responsibly (with or without bug bounty programs), so it's actually quite fun for me to do my first full disclosure (of sorts).

Everything started more than a year back, when I was looking around in many web applications and reporting everything that I found. Things were good: sometimes there was a monetary benefit, sometimes I got some free stuff, and sometimes I got into the "hall of fame" or received a simple "thank you" - the overall reaction was nothing but great. The only company whose behaviour was a bit different was eBay.
I discovered a vulnerability where an attacker can perform an XSS attack over eBay's internal messages, and since the session cookies on eBay are not HttpOnly, it was quite a serious issue for targeted attacks.
When I reported this, I got the standard email back about how much they value security and so on. They asked me not to disclose the issue publicly (a normal request), but then also added that they would not give me any information about when or how the issue would be fixed. I thought this was kind of strange, but to hell with it - as long as they fix it in a normal timeframe, I don't care.

3 months passed with no information from them, and out of curiosity I checked the issue again. It was still there. Because the issue was a simple case of missing encoding (usually quite a quick fix), I contacted them, and the only response I got was that they would not give any information about the fix schedule.
The status was exactly the same after 5 and 7 months (the vulnerability was still there and the response to my email was the same).

After that I pretty much forgot about it. I had a lot to do, so eBay was the last thing I cared about. Up until yesterday, when during a Skype chat someone mentioned the Yahoo bug bounty case (https://grahamcluley.com/2013/09/serious-yahoo-bug/) and I remembered eBay again.

So today I logged into eBay and tried to replicate the issue (more than a year later!) - it was still there. So it must not be as dangerous as I thought, and no harm can come from making it public.


1. Start by sending a message to another user (pick "This is not about an item")



2. Select the "attach photos" functionality and upload a picture (my upload was monkey.jpg) - catch the request itself with Burp (or some other proxy)



3. Modify the GET parameter named "picfile" and the header named "X-File-Name" to contain your payload (mine was </script><script>alert('XSS')</script>)



4. If everything went well, you get something like this and can submit the request (after filling in the captcha and other stuff) - catch the request again with the proxy



5. I'm not sure that this is a "MUST", but I also modified the file name in this request


RESULT: When the target opens the message, the result he/she gets looks like this



QUICK ANALYSIS: Where exactly the payload is inserted
The filename is used inside the message HTML in 2 places. The first is where it's displayed (encoded correctly).


The second is inside the JavaScript - no encoding is used there




Impact of this vulnerability (my opinion)
There are many things that make this issue dangerous. This is a short list of some of them:
  • You can very easily create new users to perform these attacks (no email verification)
  • The target even gets an email about your message
  • Only 3 cookies are HttpOnly on eBay, and none of them are needed for session hijacking
  • There seem to be no limiting factors for the XSS payload (there might be length limits, but that is easy to bypass)
  • It can be combined with other stuff like http://www.securityfocus.com/archive/1/533361 (which also still works on eBay!)