Friday, December 25, 2009
Improved version of pcap2rawc
-- pcap2RawCLayers.py --
#!/usr/bin/python
try:
    from scapy.all import *
except ImportError:
    print "old way..."
    from scapy import *
import sys
from binascii import *

if len(sys.argv) == 2:
    print "Parsing " + str(sys.argv[1])
else:
    print "Usage: python " + sys.argv[0] + " file.pcap"
    exit(10)

pcap = rdpcap(sys.argv[1])
out = file(sys.argv[1] + ".rawc", "w")
out.write("// Generated from pcap2RawCLayers.py\n")
i = 0
buff = ""
arrays = []
for p in pcap:
    print "// packet " + str(i) + ": ***"
    # Walk down the layers, dumping each layer's header bytes as its own array
    while p.payload and len(p.payload) > 0:
        q = p.copy()
        q.remove_payload()  # keep only this layer's header bytes
        bytes = len(q)
        strbyte = ""
        for j in range(0, bytes):
            if j % 8 == 0:
                strbyte = strbyte + "\n "
            strbyte = strbyte + "0x" + str(hexlify(str(q)[j]))
            if j < bytes - 1:
                if (j + 1) % 8:
                    strbyte = strbyte + ","
                else:
                    strbyte = strbyte + ", "
        rawpkt = " rawpkt" + str(q.name) + "[" + str(i) + "] = {" + strbyte + " }; /* end rawpkt" + str(q.name) + "[" + str(i) + "] */\n"
        p = p.payload
        if "rawpkt" + str(q.name) not in arrays:
            arrays.append("rawpkt" + str(q.name))
        buff = buff + rawpkt
    # Innermost layer: dump the application payload, if there is one
    if not p.payload and hasattr(p, "load") and p.load:
        q = p.copy()
        bytes = len(q.load)
        strbyte = ""
        for j in range(0, bytes):
            if j % 8 == 0:
                strbyte = strbyte + "\n "
            strbyte = strbyte + "0x" + str(hexlify(str(q.load)[j]))
            if j < bytes - 1:
                if (j + 1) % 8:
                    strbyte = strbyte + ","
                else:
                    strbyte = strbyte + ", "
        rawpkt = " rawpktPayload[" + str(i) + "] = {" + strbyte + " }; /* end rawpktPayload[" + str(i) + "] */\n"
        if "rawpktPayload" not in arrays:
            arrays.append("rawpktPayload")
        buff = buff + rawpkt
    i = i + 1

declares = ""
for l in arrays:
    declares = declares + " uint8_t *" + l + "[" + str(i) + "];\n"
filebuff = declares + "\n" + buff + "\n"
out.write(filebuff)
out.close()
print filebuff
print "//" + str(i) + " packets written in " + sys.argv[1] + ".rawc"
Tuesday, December 22, 2009
Rule2Alert
I have started a new project with Josh Smith and Will Metcalf. While we were talking about scapy, Josh asked me if I would like to get involved in the project, and we created a Google Code project called "rule2alert".
It's written in Python and uses scapy. The purpose of this project is to read Snort-compatible rules and write a pcap with packets that should match those rules. This can later be used to test NIDS like Suricata and Snort and to detect problems in the detection plugins. Of course this needs a lot of development for each rule keyword, so we don't think we will generate payloads for all the rules, but we should cover the majority of them. At the moment we deal with content and content modifiers, hex data inside content, and flow options, performing TCP 3-way handshakes. The next steps will be focused on HTTP protocol options, like uricontent.
We hope it will be a good QA tool. If you would like to get involved, feel free to get in touch.
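For illustration, here is a minimal sketch of the core idea, assuming a trivial rule with a plain content match. The rule text, regex, addresses and ports are made up for the example; the real tool also handles modifiers, hex data, flow options and the 3-way handshake:
# Hypothetical sketch: extract the content:"..." string from a rule and
# emit a packet carrying it, so the rule should alert on the pcap.
import re
from scapy.all import *

rule = 'alert tcp any any -> any 80 (msg:"test"; content:"GET /evil"; sid:1000001;)'
content = re.search(r'content:"([^"]+)"', rule).group(1)

# One data packet on the flow the rule describes (handshake omitted here)
pkt = IP(src="10.0.0.1", dst="10.0.0.2") / \
      TCP(sport=1025, dport=80, flags="PA") / content
wrpcap("sid1000001.pcap", [pkt])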
Monday, December 7, 2009
I must feel lucky...
"the 'impossible' happened"...
lol!
valgrind: m_scheduler/scheduler.c:1144 (vgPlain_scheduler): the 'impossible' happened.
valgrind: VG_(scheduler), phase 3: run_innerloop detected host state invariant failure
==28716== at 0x3802A7AC: report_and_quit (m_libcassert.c:140)
==28716== by 0x3802AABA: vgPlain_assert_fail (m_libcassert.c:205)
==28716== by 0x3804E283: vgPlain_scheduler (scheduler.c:1165)
==28716== by 0x38060CB0: run_a_thread_NORETURN (syswrap-linux.c:89)
Thursday, December 3, 2009
On the inclusion in the Draft Sustainable Economy Law
In view of the inclusion in the Draft Sustainable Economy Law (Anteproyecto de Ley de Economía Sostenible) of legislative changes that affect the free exercise of the freedoms of expression and information and the right of access to culture through the Internet, we journalists, bloggers, users, professionals and creators of the Internet express our firm opposition to the draft, and declare that...
1.- Copyright cannot be placed above citizens' fundamental rights, such as the right to privacy, to security, to the presumption of innocence, to effective judicial protection and to freedom of expression.
2.- The suspension of fundamental rights is, and must remain, the exclusive competence of the judiciary. Not a single shutdown without a court ruling. This draft, contrary to what is established in Article 20.5 of the Spanish Constitution, puts in the hands of a non-judicial body (an agency under the Ministry of Culture) the power to prevent Spanish citizens from accessing any web page.
3.- The new legislation will create legal uncertainty across the entire Spanish technology sector, damaging one of the few fields of development and future growth in our economy, hindering the creation of companies, introducing obstacles to free competition and slowing down its international expansion.
4.- The proposed legislation threatens new creators and hinders cultural creation. With the Internet and successive technological advances, the creation and publication of all kinds of content has been extraordinarily democratized; it no longer comes predominantly from the traditional cultural industries but from a multitude of different sources.
5.- Authors, like all workers, have the right to live from their work, with new creative ideas, business models and activities linked to their creations. Trying to prop up an obsolete industry that cannot adapt to this new environment through legislative changes is neither fair nor realistic. If their business model was based on controlling copies of works, and on the Internet that is not possible without violating fundamental rights, they should look for another model.
6.- We believe that in order to survive, the cultural industries need modern, effective, credible and affordable alternatives suited to the new social uses, rather than limitations as disproportionate as they are ineffective for the purpose they claim to pursue.
7.- The Internet must operate freely and without political interference sponsored by sectors that seek to perpetuate obsolete business models and make it impossible for human knowledge to remain free.
8.- We demand that the Government guarantee Net neutrality in Spain by law, in the face of whatever pressure may arise, as a framework for the development of a sustainable and realistic economy for the future.
9.- We propose a genuine reform of intellectual property law, oriented towards its purpose: to return knowledge to society, to promote the public domain and to limit the abuses of the collecting societies.
10.- In a democracy, laws and their amendments must be approved after due public debate and after consulting all the parties involved beforehand. It is unacceptable for legislative changes affecting fundamental rights to be made in a non-organic law that deals with another matter.
This text is being published on a multitude of websites. If you agree, publish it on your blog too, tweet it, share it on Facebook.
Friday, November 27, 2009
Profiling with Shark on Mac OS X - Snow Leopard
How to do profiling on Mac OS X.
The version I'm running is Mac OS X 10.6.1 (Snow Leopard). I have installed the Xcode development tools and the usual extras like autoconf, automake, etc. You may also need to run as root, depending on your configuration. Mac OS X provides a great program for profiling across different languages. It's called "Shark".
1. To run it, press cmd+space and type Shark (Spotlight search is useful).
2. Now that Shark is open, select "Launch" in the last combo box (it lists all the running processes, and you could also profile everything, but we are only interested in our app).
3. Now click "Start", and a new dialog box will ask you for the executable path and the arguments. Set them as you need. You may also set environment variables for debugging or whatever.
4. Now just click OK and wait for the results.
Once the results are generated, you can sort them by each of the columns. "Self" and "Total" indicate the cost of each part, and this points you to the most critical sections. They may be simple code, but maybe they are executed many times; so even if a section looks very simple, improving it just a bit can win you real performance!
Now you can check some features of Shark. You can expand the traces of the function calls in a hierarchical list, where the self/total cost % is split across each function call: you may see a grouped call at 24%, but expanding it, one call may use 10%, another 5% and another 9%. This way you can build a rough picture of the execution flow. Shark can also show the profile per thread, and the call stack as "Heavy", "Tree" or both.
If you don't have the source code, don't worry: Shark disassembles the application for you, lol!
So now you may think you need to check the code of that part. Shark will help you there too: double-click on the function call and the code is displayed automatically, with the CPU cost shown next to each significant line.
The rest is up to your judgement. You may need to determine where a loop is costing a lot, or where a performance improvement can be made. You can also have a look at the generated charts. The id of the CPU can also be specified here.
Tuesday, October 27, 2009
pcap2rawc.py
#!/usr/bin/python
# File: pcap2rawc.py
# Pablo Rincon Crespo [pablo.rincon.crespo at gmail]
#
try:
    from scapy.all import *
except ImportError:
    print "old way..."
    from scapy import *
import sys
from binascii import *

if len(sys.argv) == 2:
    print "//Parsing " + str(sys.argv[1])
else:
    print "Usage: python " + sys.argv[0] + " file.pcap"
    exit(10)

pcap = rdpcap(sys.argv[1])
out = file(sys.argv[1] + ".rawc", "w")
out.write("// Generated from pcap2rawc.py\n")
i = 0
for p in pcap:
    i = i + 1
    print "//processing packet " + str(i) + ": ***"
    print p.command()
    bytes = len(p)
    strbyte = ""
    for j in range(0, bytes):
        if j % 8 == 0:
            strbyte = strbyte + "\n "
        strbyte = strbyte + "0x" + str(hexlify(str(p)[j]))
        if j < bytes - 1:
            if (j + 1) % 8:
                strbyte = strbyte + ","
            else:
                strbyte = strbyte + ", "
    rawpkt = " uint8_t rawpkt" + str(i) + "[] = {" + strbyte + " }; /* end rawpkt" + str(i) + " */\n"
    print rawpkt
    out.write(rawpkt + "\n")
out.close()
print "//" + str(i) + " packets written in " + sys.argv[1] + ".rawc"
Saturday, September 19, 2009
Snort (2.8.* < 2.8.5 stable) Unified1 output bug
Here is the advisory :)
Advisory:
=========
Snort unified 1 IDS Logging Alert Evasion, Logfile Corruption / Alert Falsification
Log:
====
30/06/2009 Bug detected.
20/07/2009 First mail to the Snort team.
20/07/2009 The Snort team answers that they will fix it in the next release (2.8.5).
16/09/2009 Snort 2.8.5 released, bug fixed.
Affected Versions:
==================
snort-2.8.1
snort-2.8.2
snort-2.8.3
snort-2.8.4
snort-2.8.5.beta*
Discussion:
===========
snort-2.8.* is susceptible to a denial of service vulnerability in the Snort unified 1 binary output format.
It occurs when snort.conf uses the classic unified 1 output configuration, as follows:
output unified: filename snort.log, limit 128
and the Stream5 preprocessor is enabled.
This issue is due to the application's failure to properly set the offset of a memory buffer write when logging rebuilt stream packet data. The result is a corrupted unified log header and data in the logfile, with out-of-bounds offsets, making it impossible to parse/view the generated alerts with a normal parser/alert frontend.
When an alert carries packet data (the raw packet), the function UnifiedLogStreamCallback() writes the raw packet data over the UnifiedLog header, which holds the type and size of the alert and is followed by the alert information.
--output-plugins/spo_unified.c, line 803 (at least in snort-2.8.4), function UnifiedLogStreamCallback():
-------->SafeMemcpy(write_pkt_buffer, packet_data,...
should be
-------->SafeMemcpy(write_pkt_buffer + offset, packet_data,...
With this bug, the alert type and size are overwritten with the MAC addresses of the raw packet, so with malformed packets (Eth/IP/TCP/Data with modified MAC addresses) the size, the type and other information can be set, falsifying alerts for a later parsing process. If an attacker builds malformed packets so that a falsified alert claims a size bigger than 128M (the default unified log size limit), snort will keep inserting alerts into the file, but a parser reading that alert will try to jump 128M, skipping the alerts inserted after the falsified one.
An attacker can also insert a complete list of falsified alerts by malforming packets: the raw packet carries TCP data, which can be filled with falsified UnifiedLog alert structures (as binary data). The attacker only needs to adjust the packet headers so that the "size of the alert" (overwritten with the packet's MACs) makes the parser read the next alert at an offset inside the TCP data, i.e. inside the list of falsified alerts.
Impact:
=======
With this bug an attacker can break the alert log headers, making it impossible for a parser to extract the alert information correctly. An attacker can also insert falsified alerts into the logfiles, injecting unified structures with false alerts and false pcaps (Ethernet/IP/TCP/Data), by malforming the packets of a TCP stream that matches a normal alert, which won't even be inserted correctly.
Proof of concept:
=================
To reproduce the bug you must have a unified 1 parser accepting unified logs, with "output unified: filename snort.log, limit 128" configured in snort.conf and the Stream5 preprocessor enabled. Then you need to send a content payload that will generate an alert, but this payload must be divided in two parts, sent in two consecutive (and different) packets, so that Stream5 reassembles it as a PKT_REBUILT_STREAM. The header of the unified alert log will then be overwritten with the raw packet information. There are two proof-of-concept scapy scripts: one generates a pcap that inserts an alert overwriting the header so that a parser thinks the alert is bigger than 128M, and another that inserts a falsified alert.
The pcaps can be processed in snort with snort -r "the_file.pcap"...
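As an illustration of that reproduction step (this is not one of the original PoC scripts; all addresses, ports, sequence numbers and the payload are made up), a scapy sketch along these lines splits the payload over two consecutive segments:
# Hypothetical sketch: 3-way handshake, then an alert-triggering payload
# split over two segments so Stream5 rebuilds it (PKT_REBUILT_STREAM).
from scapy.all import *

payload = "GET /evil HTTP/1.0\r\n\r\n"  # assumed to match some content rule
half = len(payload) / 2
ip = IP(src="10.0.0.1", dst="10.0.0.2")
rip = IP(src="10.0.0.2", dst="10.0.0.1")
cseq, sseq, sport, dport = 1000, 2000, 1025, 80

pkts = [
    ip / TCP(sport=sport, dport=dport, flags="S", seq=cseq),
    rip / TCP(sport=dport, dport=sport, flags="SA", seq=sseq, ack=cseq + 1),
    ip / TCP(sport=sport, dport=dport, flags="A", seq=cseq + 1, ack=sseq + 1),
    # first half of the payload...
    ip / TCP(sport=sport, dport=dport, flags="PA", seq=cseq + 1, ack=sseq + 1) / payload[:half],
    # ...second half in the next segment; crafted MAC addresses would go in
    # an Ether() layer on top to falsify the logged header fields
    ip / TCP(sport=sport, dport=dport, flags="PA", seq=cseq + 1 + half, ack=sseq + 1) / payload[half:],
]
wrpcap("unified1_poc.pcap", pkts)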
Fix:
====
Install snort-2.8.5 or add the offset and recompile snort:
--output-plugins/spo_unified.c, line 803 (at least in snort-2.8.4), function UnifiedLogStreamCallback():

SafeMemcpy(write_pkt_buffer + offset, unifiedData->logheader,
           sizeof(UnifiedLog), write_pkt_buffer,
           write_pkt_buffer + sizeof(DataHeader) +
           sizeof(UnifiedLog) + IP_MAXPACKET);

offset += sizeof(UnifiedLog);
unifiedData->data->current += sizeof(UnifiedLog);

if(packet_data)
{
-------->SafeMemcpy(write_pkt_buffer, packet_data,
                    offset + unifiedData->logheader->pkth.caplen,
                    write_pkt_buffer, write_pkt_buffer +
                    sizeof(DataHeader) + sizeof(UnifiedLog) + IP_MAXPACKET);

    if(fwrite(write_pkt_buffer, offset + unifiedData->logheader->pkth.caplen,
              1, unifiedData->data->stream) != 1)
        FatalError("SpoUnified: write failed: %s\n", strerror(errno));

    unifiedData->data->current += unifiedData->logheader->pkth.caplen;
}
else
--+ 825
Look at that closely and you'll see that the buffer is overwritten when packet_data is not 0, and then the buffer is written to the log file. The fix is really simple: just write to write_pkt_buffer + offset instead of write_pkt_buffer.
-------->SafeMemcpy(write_pkt_buffer, packet_data,...
becomes
-------->SafeMemcpy(write_pkt_buffer + offset, packet_data,...
or use unified 2.
Conclusions:
============
An attacker can:
1. Corrupt the log files.
2. Perform attacks after the malformed packets, in order to prevent them from being logged/displayed.
3. DoS the parsers by inserting alerts with a header size bigger than the file size limit (they would lose a lot of alerts...).
4. Insert a completely falsified attack session by encapsulating many alerts in the malformed TCP packets.
And on our side:
5. We can patch it and be happy with our systems using Stream5 and the rest of the preprocessors :)
6. It's strongly recommended to use unified2 if you're not using it yet.
7. If you're still using unified1, use alert_unified or log_unified, or plain unified but with snort patched.
Thanks to:
==========
Jaime Blasco and Juan Blanco working on the ossim-agent "arakiri".
Matt Jonkman and Victor Julien (a pcap generated with the splicer script was the starting point for the scapy scripts).
Carlos Terrón for his great unified1 parser.
The OSSIM project.
Credits:
========
Pablo Rincón Crespo 31/07/2009
pablo.rincon.crespo
at gmail
Wednesday, September 16, 2009
another birthday present
To:
Subject: [Snort-devel] Snort 2.8.5 Now Available
Snort 2.8.5 is now available on snort.org, at
http://www.snort.org/
[...]
Friday, June 19, 2009
pcap to scapy
A script that generates a Python file with the packet-generation code that Scapy needs to replicate the traffic of a pcap file. I hope it will be useful for someone when testing NIDS features :)
## pcap2scapy.py ##
###################
# Author: Pablo Rincon Crespo
# mail: pablo@ossim.net
# Comments: This script reads a pcap and writes a .py with the scapy
#           commands needed to replicate the traffic.
from scapy import *
import sys

if len(sys.argv) == 2:
    print "Parsing " + str(sys.argv[1])
else:
    print "Usage: python " + sys.argv[0] + " file.pcap"
    exit(10)

pcap = rdpcap(sys.argv[1])
out = file(sys.argv[1] + ".py", "w")
out.write("from scapy import *\n\nl=[]\n")
i = 0
for p in pcap:
    i = i + 1
    # p.display()
    print "*** Scapy packet " + str(i) + ": ***"
    print p.command()
    out.write("p=" + p.command() + "\nl.append(p)\n\n")
out.write("\n\n#sendp(l,iface='eth0')\n#wrpcap('/tmp/tmp.pcap',l)")
out.close()
print str(i) + " packets written in " + sys.argv[1] + ".py"
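For reference, the generated .py comes out roughly like this for a one-packet capture (a hypothetical packet; the exact output of p.command() depends on the scapy version):
from scapy import *

l=[]
p=Ether(dst='00:0c:29:a0:12:9c', src='00:50:56:c0:00:08', type=2048)/IP(src='10.0.0.1', dst='10.0.0.2')/TCP(sport=1025, dport=80, flags='S')
l.append(p)


#sendp(l,iface='eth0')
#wrpcap('/tmp/tmp.pcap',l)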