Sunday, September 5, 2010
On “Finding Security Vulnerabilities in Java Applications using Static Analysis”
The most common vulnerabilities in web applications nowadays exploit weak input validation. Attackers can feed specially crafted inputs to an application, making it perform actions that can breach confidential information stored in back-end servers, bring the application down and deny service to legitimate users, reveal user credentials to attackers, etc. Examples of such attacks are SQL injection and Cross-site Scripting, both of which belong to the OWASP top 10 web application security risks [2].
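As a quick illustration of the kind of weakness described above — sketched here in Python with an in-memory SQLite database for brevity, rather than a Java web application — a query built by string concatenation lets an attacker's input rewrite the query itself, while a parameterized query treats the same input as plain data:

```python
import sqlite3

# In-memory database with one user, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: untrusted input is concatenated straight into the query.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchall()

payload = "' OR '1'='1"
print(len(login_unsafe("alice", payload)))  # injection succeeds: 1 row leaks
print(len(login_safe("alice", payload)))    # injection fails: 0 rows
```

The classic `' OR '1'='1` payload turns the unsafe query's WHERE clause into a tautology, leaking the row without knowing the password.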
The paper describes a tool that can be used to probe Java applications for security vulnerabilities stemming from improperly validated user input, which the authors call tainted object propagation problems. The tool performs a static analysis on the application's bytecode, trying to determine possible sources of vulnerabilities based on a detailed specification written in PQL, a program query language. Because the tool uses an improved object-naming scheme and a more precise context-sensitive pointer analysis, it produces more accurate results, with fewer false positives.
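The taint-propagation idea itself can be caricatured at runtime in a few lines (the paper does this statically over bytecode; all names below are illustrative, not the paper's API): values from *sources* are marked tainted, taint survives derivation steps like concatenation, and *sinks* reject tainted data unless it passed a *sanitizer*:

```python
class Tainted(str):
    """A string marked as coming from an untrusted source."""

def source(value):
    # e.g., an HTTP request parameter
    return Tainted(value)

def concat(a, b):
    # Taint propagates through derivation steps such as concatenation.
    result = str(a) + str(b)
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

def sanitize(value):
    # A (naive) sanitizer; its plain-str result carries no taint mark.
    return str(value).replace("'", "''")

def sink(query):
    # e.g., executing an SQL statement: tainted data must never reach here.
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached a sink")
    return query

user_input = source("' OR '1'='1")
try:
    sink(concat("SELECT * FROM users WHERE name = '", concat(user_input, "'")))
except ValueError as err:
    print("flagged:", err)

# After sanitization the taint mark is gone and the sink accepts the query.
print(sink(concat("SELECT * FROM users WHERE name = '",
                  concat(sanitize(user_input), "'"))))
```

The static analysis in the paper answers the same source-to-sink reachability question, but over all possible executions at once, which is why the precision of its pointer analysis matters so much.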
One of the strengths of the tool is that it can be used by developers during the actual development cycle, which in a way cuts development costs because it minimizes expenses from activities like manual code reviews. The authors also suggest that their approach could be used to build the same kind of static analysis tool for other bytecode-based languages like C#.
REFERENCES
[1] V. Benjamin Livshits and Monica S. Lam. Finding Security Vulnerabilities in Java Applications using Static Analysis. In Proceedings of the 14th USENIX Security Symposium, 2005.
[2] OWASP Top 10 – 2010: The Ten Most Critical Web Application Security Risks. The Open Web Application Security Project (OWASP). At [www.owasp.org]
Thursday, August 26, 2010
On “Why Cryptosystems Fail” and “Father Guido Five Minute University”
Father Guido's point, in his idea of a Five Minute University, is that the value of the education we get over our years in a learning institution lies in how useful it is for dealing with the realities we face in the outside world. He says that we usually remember just the things that are actually of use to us (in our job, for example), and tend to forget everything else.
In the same manner, the quote above implicitly states the real value of any security system, however complex or simple it may be. It is very common to implement multi-level security systems these days, and the value of such systems is only as good as the weakest component in the chain: a single hole is enough to let attackers break into the whole system. The article states that most attacks on cryptosystems do not exploit their technical weaknesses, but are instead directed at other aspects of the system, i.e., physical implementation, poor management, and quality control.
So the author suggests a shift in the way we think when evaluating the strength of computer security systems. He says that effort should be diverted toward strengthening the competence of the people involved in implementing the other aspects of a security system.
References:
[1] Anderson, Ross. Why Cryptosystems Fail.
[2] _____. Father Guido Sarducci Five Minute University. At [youtube.com]
Monday, July 5, 2010
On "Setuid Demystified" and "Understanding Android Security"
The first paper, as its name implies, tries to unravel the way uid-setting system calls in Unix-based operating systems work. What are these uid-setting system calls anyway? They are a sort of API one can call in Unix-based systems to set or drop, temporarily or permanently, the privileges, i.e., the resources, a program can access. For example, when you want a program to run with root privileges, you will probably call one of the functions in this API.
One of the key issues the researchers tried to address in their study is the inconsistent behaviour of these system calls across several Unix-based operating systems. They attributed this to the ambiguity, or outright absence, of a specification documenting what these system calls do and how they should do it, i.e., the rules or invariants that govern each function. As the article describes, this can lead to serious security and portability issues in applications developed for such systems.
In their study, the researchers first tried to determine the behaviour of these system calls across several implementations by examining the actual source files. They eventually realized the impracticality of this approach and decided to build a more formal methodology: a Finite State Automaton (FSA) covering the different possible states of a process as altered by the uid-setting system calls made within it [1]. They automated the construction of an FSA model for each uid-setting call through simulation. The simulator let them create FSA models faster and with far less tedium, which in turn made it easy to compare the FSA models generated from the different implementations of each call. This led to much faster detection of the inconsistencies present across implementations, and the formal method also makes automating such detection possible. It even led to the discovery of security vulnerabilities in some implementations of the uid-setting calls.
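A tiny sketch of the FSA idea can be written as follows. This models one plausible simplification of Linux `setuid(2)` semantics, not the exact rules from the paper: a process state is a (ruid, euid, suid) triple, each call is an attempted transition, and enumerating every (state, argument) pair — as the paper's simulator does — yields the automaton:

```python
def setuid_linux(state, uid):
    """Simplified model of Linux setuid(2) over states (ruid, euid, suid).

    A privileged process (euid == 0) may set all three IDs at once; an
    unprivileged one may only set its euid to its real or saved uid.
    Returns (new_state, succeeded); a failed call leaves the state intact.
    """
    ruid, euid, suid = state
    if euid == 0:
        return (uid, uid, uid), True    # privileged: drop to uid permanently
    if uid in (ruid, suid):
        return (ruid, uid, suid), True  # unprivileged: change euid only
    return state, False                 # disallowed: no transition

def build_fsa(states, uids):
    # Enumerate every transition of the automaton, one edge per
    # (state, argument) pair, mirroring the paper's simulation step.
    return {(s, u): setuid_linux(s, u) for s in states for u in uids}

fsa = build_fsa([(0, 0, 0), (500, 0, 0), (500, 500, 0)], [0, 100, 500])
# A root process calling setuid(500) drops all privileges for good...
print(fsa[((0, 0, 0), 500)])
# ...but a process that kept suid == 0 can regain root via setuid(0),
# which is exactly the kind of subtlety the FSA models expose.
print(fsa[((500, 500, 0), 0)])
```

Comparing automata built this way from two different kernels makes any divergence in call semantics show up as a differing edge, which is what made the inconsistencies easy to detect.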
The one thing that caught my interest in this paper is the way the researchers developed a formal method for an activity that is rather tedious when done manually. Their success clearly strengthens the case for developing formal models of things that may appear hopelessly complex at first. Such formal methods also enable a deeper understanding of what is happening “inside”, albeit with some necessary abstractions.
The other article, “Understanding Android Security”, discusses the core security mechanisms employed by Android, a mobile operating system.
The authors discuss the mechanism by which Android manages the privileges of the different applications running on it. The security framework follows a permission-labeling scheme: an Android developer assigns permission labels to an application by explicitly specifying them in an XML manifest file [2]. These permission labels specify how the application handles accesses made by other applications to its components. The manifest file is used to set an application's permission labels at installation time, and they remain fixed until the application is reinstalled.
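For instance, a manifest can both request permissions from the system and define a permission label that guards one of the application's own components (the package and label names below are made up for illustration):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.demo">
    <!-- A permission this application requests for itself -->
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- A permission label this application defines to protect its own component -->
    <permission android:name="com.example.demo.READ_DATA"
                android:protectionLevel="dangerous" />

    <application>
        <!-- Only callers holding com.example.demo.READ_DATA may bind here -->
        <service android:name=".DataService"
                 android:permission="com.example.demo.READ_DATA" />
    </application>
</manifest>
```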
Although this security framework is inherent to the Android platform, programming errors made by security-unaware developers remain the major source of security vulnerabilities in Android applications. In this regard, the researchers developed a tool called Kirin, which automates verifying that the set of permissions defined in an application is consistent.
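The flavour of check such a tool performs can be caricatured in a few lines (the rules below are invented for illustration and are not Kirin's actual rule set): flag any application whose requested permission set contains a combination deemed dangerous together.

```python
# Each rule names a combination of permissions that is suspicious together
# (illustrative only -- not Kirin's actual rules).
RULES = [
    ({"RECEIVE_SMS", "WRITE_SMS"}, "can intercept and suppress incoming SMS"),
    ({"RECORD_AUDIO", "INTERNET"}, "can record audio and exfiltrate it"),
]

def check_manifest(requested_permissions):
    """Return the reasons an app's permission set violates any rule."""
    requested = set(requested_permissions)
    return [reason for combo, reason in RULES if combo <= requested]

print(check_manifest({"INTERNET", "RECORD_AUDIO"}))  # one rule fires
print(check_manifest({"INTERNET"}))                  # no rule fires
```

Because the permission set is fixed at install time in the manifest, this kind of verification can run once per installation rather than at every access.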
Automation seems to be everything, doesn't it?
References:
[1] H. Chen, D. Wagner, and D. Dean. “Setuid Demystified”.
[2] W. Enck, M. Ongtang, and P. McDaniel, “Understanding Android Security”. IEEE Security and Privacy. 2009, pp. 50-57
Wednesday, June 23, 2010
On Buffer Overflow Attacks
Although the issue of buffer overflows existed as early as the 1980s, it gained notoriety as an immediate security threat only after the wrath of the infamous Morris worm [1] took hold. As the first well-known computer worm to make use of a buffer overflow, the Morris worm spurred an effort among computing professionals to develop tools and techniques to mitigate and prevent further exploitation of such a common vulnerability.
Several efforts have aimed to prevent and stop the exploitation of such vulnerabilities, and each proved successful in its own right [1, 4]. But as time went by, these preventive mechanisms were found to have loopholes of their own, defeating the security they provide [4, 5].
Although newer programming technologies, i.e., type-safe languages with built-in bounds checking like Java, have decreased if not completely eradicated the threat, the existence of legacy systems and their continued support keep the risks posed by such attacks at the highest level. I believe, in fairness to a language like C, that the problem is not really in the language per se; rather, it lies with the individuals who write programs in it. In general, without proper awareness, and under the pressure of the many competing concerns during software development, it becomes harder for developers to examine every aspect of a program. Unfortunately, security is one such aspect that is simply forgotten during development. Although some studies are being conducted in this area, for example [3], the field still lacks major breakthroughs that would enable the adoption of a standard security framework for building secure software applications.
Because it is in the nature of computer hackers to unravel vulnerabilities in systems widely used by the computing community and exploit them “for fun and profit” [2], it is a constant struggle for security professionals to cope with them and stay ahead.
References:
[1] Crispin Cowan, Perry Wagle, Calton Pu, Steve Beattie, and Jonathan Walpole. Buffer Overflows: Attacks and Defences of the Vulnerability of the Decade
[2] “Aleph One”. Smashing The Stack For Fun And Profit. Phrack, 7(49), November 1996
[3] Gary McGraw, Brian Chess, Sammy Migues. Software [In]security: The Building Security In Maturity Model (BSIMM). InformIT, at [http://www.informit.com/articles/article.aspx?p=1332285]. March 16, 2009
[4] Alexander Sotirov, Mark Dowd. Bypassing Browser Memory Protection – Setting back browser security by 10 years.
[5] Bulba and Kil3r. Bypassing Stackguard and Stackshield. Phrack, at [http://www.phrack.org/issues.html?issue=56&id=5], May, 2000
Saturday, June 12, 2010
On the CRS Report for Congress “Botnets, Cybercrime, and Cyberterrorism: Vulnerabilities and Policy Issues for Congress” [6]
The report articulates several scenarios by which institutions and groups of individuals with interests against the US government could use existing technologies to cripple the country's economy. It explores the possibility of a coordinated attack against US government-owned IT infrastructure. Although the possibilities exist, the agencies concerned downplay the extent of the real damage such attacks could cause. They argue that recovery could be handled much as they have handled natural calamities, i.e., flooding, earthquakes, or random machine breakdowns, in the past. They also argue that the cost of such attacks outweighs the benefits they would yield, which should deter anyone from attempting them.
Another concern discussed in the report is the commercialization of the tools and technical skills needed to commit cybercrimes. The ease with which one can profit from stealing financial information, trade secrets, etc. and selling them on underground markets lures more “brilliant” individuals into this kind of activity. The motivations behind these attacks are no longer purely financial; some are initiated by groups pushing political and social reforms [6].
The report mentions botnets throughout. The Russian-based Kaspersky Lab reported that the major threat plaguing the Internet today is the threat of botnets [1]. Botnets (bot networks) are networks of compromised machines controlled by an attacker called the “bot master” [2]. Botnets are largely responsible for the spread of malware across the Internet, leading to the theft of personal information and other sensitive data from government institutions and from companies that store confidential customer information. Furthermore, botnets have been used in DDoS attacks and have proved very efficient [4]. In most cases, the infected computers are home personal computers that are unprotected or whose owners are not well aware of these security threats.
One of the major problems security researchers face in dealing with botnets and similar threats is the high level of technical proficiency of the individuals behind them. The technical complexity of the tools and techniques these hackers use, i.e., code encryption and obfuscation, keeps rising, to the point that security researchers cannot get close to them. In many cases, these individuals monitor the activities of the security researchers pursuing them, enabling them to develop even better ways to avert and prevent detection [5].
Another factor is the extent of the infection already present across the Internet. The large number of infected computers and established C&C servers makes a complete takedown of these networks much more difficult [3]. The use of peer-to-peer network architectures, instead of the traditional C&C structure, in botnets surfacing nowadays makes it even harder for security professionals to alleviate the severity of the threats they pose.
In most cases, the proliferation of malicious programs or Trojan horses (which turn a computer into a zombie) can be attributed largely to unsuspecting Internet users who are unaware of the different security risks lurking on the World Wide Web. I believe a sufficient and massive information campaign about these security risks, directed at the majority of Internet users, should be considered. It is safe to assume that most Internet users are not really aware of these prevailing security issues, which makes them even more vulnerable. We should increase everyone's awareness of these security trends: preventive security should start at the end-user level. I do not mean to cause paranoia among individuals who do not want to be bothered with these things. But since unsuspecting Internet users play a major role in the spread of these botnets, we have no choice but to force the issue on them. I am not saying this will put a stop to such cyber-attacks, but at the very least this initiative should decrease the number of infected systems and possibly prevent further infections. And in times like these, every bit of help counts.
References:
[1] [http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci1030284,00.html]
[2] Grizzard, Julian B., et al. “Peer-to-Peer Botnets: Overview and Case Study”. USENIX.org, at [http://www.usenix.org/event/hotbots07/tech/full_papers/grizzard/grizzard_html/]
[3] Fisher, Dennis. “Botnets using ubiquity as security”. ThreatPost.com, at [http://threatpost.com/en_us/blogs/botnets-using-ubiquity-security-060710]
[4] “Robot Wars – How Botnets Work”. WindowSecurity.com, at [http://www.windowsecurity.com/articles/Robot-Wars-How-Botnets-Work.html]
[5] VitalyK. “Gumblar: Farewell Japan”. Securelist.com, at [http://www.securelist.com/en/blog/2132/Gumblar_Farewell_Japan]
[6] Wilson, Clay. “Botnets, Cybercrime, and Cyberterrorism: Vulnerabilities and Policy Issues for Congress”. CRS Report for Congress. January 29, 2008.
Tuesday, June 8, 2010
On Ken Thompson's Reflection on Trusting Trust
Early on, one can hear the humility in his voice. He expressed the importance of teamwork by recognizing the individuals who had collaborated and worked with him. When the members of a team perform synergistically, complementing each other's weaknesses and taking advantage of each other's strengths, it leads to a situation where “the whole is greater than the sum of its parts”1.
Following his lecture closely, one can see that his delivery was invigorating in the way he showed the gradual build-up of his capability as a programmer. He seems to be saying, implicitly, that anyone with interest and determination can excel in the field as long as he or she is persistent. For the progress and advancement of the literature in our field, this thought is very welcome. The real issue comes when the intentions of individuals possessing such valuable knowledge are questioned. Are they the bad guys or the good guys?
On the subject of security, specifically in programming and software development, one can see the dilemma we face. In a field where “don't reinvent the wheel” is the common mantra, we confront trust issues each time we decide to use a third-party application or API in the software we develop. When we reuse a piece of code, it is easy for us to check whether it contains malicious instructions. But when the integrity of the low-level parts of our programming environment, i.e., the compiler, assembler, etc., is in question, there lies the problem, especially if they are not open source.2
Ken Thompson's choice of topic was very timely, delivered at a time when practitioners in our field faced moral and ethical issues because of the boom of individuals calling themselves hackers who undermine the integrity of the majority. He expressed vividly his position on the importance of honesty and trust in our kind of profession. His lecture was more an open challenge to us practitioners to be morally and ethically ready when we work.
1 Attributed to Aristotle.