Tuesday, 3 January 2012

updates

My tool is starting to take shape. I have read a lot of material over the last couple of months, and that has led me to this point. I have decided that, as I am not learning any serious Linux on my course or any new programming languages this year, I should incorporate these into my final year project. This way I can get a bit of a competitive edge over my peers; I have never been one to choose the easy path. Decisions made to date, and the reasons why:


  • Should I use live capture? No: it would cause too many rendering issues on a web-based system, where each refresh would need to recalculate where the last nodes were positioned (if you were using a link graph). Scalability problems also creep in as you increase the number of nodes discovered on the network. I found two algorithms to help with this, but I think it is essentially reinventing the wheel. Considering the time constraints, I want to concentrate my time on the visualisation as much as possible.
  • Where will I get logs? I have decided to use Pcap files only, again due to time constraints; I don't want to spend two weeks understanding new logs and their relevance. Although, from what I have read, Pcap files have their own parsing issues.
  • What language should I use? I have decided from my reading that I should use Perl, as its strong association with regular expressions makes it a fast, powerful searching language (p107, Learning Perl, O'Reilly).
  • What IDE should I use, if any? I decided that, as I am using Perl, the two main options would seem to be NetBeans or Eclipse; from looking at both their websites and forums, support would appear to be stronger for Eclipse.
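To get a feel for the kind of regex-driven searching the project will lean on, here is a rough sketch, written in Java since that is where most of my coding looks likely to end up, of pulling a source and destination address out of a tcpdump-style line. The sample line and the pattern are purely illustrative; real tcpdump output varies a lot more than this.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TcpdumpLineSketch {
    // Illustrative pattern for the "IP <src> > <dst>:" shape that tcpdump
    // prints. Real output varies (IPv6, flags, options), so treat this
    // as a sketch rather than a robust parser.
    private static final Pattern IP_PAIR = Pattern.compile("IP (\\S+) > (\\S+):");

    // Returns {source, destination} if the line matches, or null if not.
    public static String[] extract(String line) {
        Matcher m = IP_PAIR.matcher(line);
        if (m.find()) {
            return new String[] { m.group(1), m.group(2) };
        }
        return null;
    }

    public static void main(String[] args) {
        String sample = "12:00:01.000000 IP 10.0.0.1.5060 > 192.168.1.2.80: Flags [S]";
        String[] pair = extract(sample);
        System.out.println(pair[0] + " -> " + pair[1]); // 10.0.0.1.5060 -> 192.168.1.2.80
    }
}
```

The same one-liner pattern would be even terser in Perl, which is exactly the strength the book is pointing at.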
These were decisions I made before Christmas from the reading I had done. But it was only yesterday, when I actually started trying to get my hands dirty, that I really began to learn about the technologies and their capabilities (there is only so much you can learn from a book). The issues I have either encountered myself so far, or that others have hit before me, are listed below along with the changes made to my plan.

  • Parsing issues for tcpdump. I discovered that these can be overcome by using AfterGlow: it is a set of Perl scripts that create a standardised CSV file (unlike raw tcpdump output, where the position of the destination and source addresses may vary, etc.).
  • This led me to start looking into front-end display APIs, where I discovered that Java has a nice API for working with a tool called Graphviz. Graphviz can in turn read one of the outputs of the aforementioned AfterGlow as a dot file and spit it out as an SVG file (a kind of XML) readable by Java. Please note this took about 11 hours of coding and reading.
  • This has provided an end-to-end solution using two languages, two external programs and four file formats, but it works.
  • On closer inspection, I can see that AfterGlow seems to provide pretty much all the Perl I need. I still need to tweak it and write a little of my own, which means most of my coding will now actually be done in Java, and I will need to master tcpdump and Graphviz in order to get the project running.
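The middle step of that chain, from AfterGlow-style CSV to a Graphviz dot file, is simple enough to sketch by hand. This is my own toy version of the idea, not AfterGlow's actual code: assume a two-column CSV of source,destination pairs and emit a dot digraph that Graphviz could render to SVG.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class CsvToDotSketch {
    // Toy stand-in for the AfterGlow -> Graphviz step: turn
    // "source,destination" CSV rows into a dot digraph. The real
    // AfterGlow scripts do far more (filtering, colouring, fan-out limits).
    public static String toDot(List<String> csvRows) {
        Set<String> edges = new LinkedHashSet<>(); // collapse repeated links
        for (String row : csvRows) {
            String[] cols = row.split(",", 2);
            if (cols.length == 2) {
                edges.add("  \"" + cols[0].trim() + "\" -> \"" + cols[1].trim() + "\";");
            }
        }
        StringBuilder dot = new StringBuilder("digraph capture {\n");
        for (String edge : edges) {
            dot.append(edge).append('\n');
        }
        return dot.append("}\n").toString();
    }

    public static void main(String[] args) {
        List<String> rows = List.of(
                "10.0.0.1,192.168.1.2",
                "10.0.0.1,192.168.1.2", // duplicate link, kept only once
                "10.0.0.1,8.8.8.8");
        System.out.print(toDot(rows));
        // Piping this output through `dot -Tsvg` would produce the SVG
        // for the Java front end to read.
    }
}
```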
