Human And Computer

The Relationship Between Sony Vaio Extended Life Battery Security and the Granularity of Materials

If you have not been involved in this industry for many years, material selection and application are subjects you will have to learn through manufacturing practice. I don't want to say much here about the production process or circuit design, since I am not a professional technician, but security is in no way something we can ignore.

To be sure, manufacturers have come to focus on the diaphragm (separator) as much as on the electrode and electrolyte. The Sony Vaio extended life battery has gone through several rounds of updated cathode materials, including metal oxides, polyanions, spinels and so on. That is to say, the materials inside a lithium-ion laptop battery have great significance for its performance.

The granularity of materials mentioned here is most directly bound up with the electrolyte, which functions as the storage medium. Since lithium-ion laptop batteries came into wide use in the market, insiders have frequently compared notes on their stability and actual capacity.

During the electrochemical reaction, metal particles are extracted from the lithium-ion cells, oxidize, and react with aqueous solutions. In secondary batteries in particular, all of these electrolyte materials lose activity as time goes by. Owing to accumulating internal contaminants, the particles do not keep their former shape and become harder to transfer inside the cells.

On the other hand, a given laptop battery such as the VGP-BPL5A normally has a diaphragm whose vent holes are set to a fixed level of transmissivity. The more effective particles that can pass through the cells, the more power they can deliver at a higher rate. If a large amount of spent LiCoO2 and LiMn2O4 resides in the cells, the carbon anode will produce more heat and risk overheating.

There is no need for me to explain the harm of overheating beyond the damage it does to the internal materials. Only greater care during research can help us avoid later recalls with far larger damages.

Reinventing the Sandpit

Sometimes it feels that the IT security world loves innovation as much as it loves to reinvent the wheel - particularly when it comes to wrapping sheets of tin around a previously established security technology and labeling it as advancement. The last few weeks have been no exception in the run up to the annual RSA conference in San Francisco, with the recent "innovation" going on in dealing with next generation malware (or AV+ as some folks refer to it) as multiple vendors launch new appliances to augment their product portfolios.

The latest security technology to undergo the transformation to tin is of course the automated analysis of suspicious binaries using various sandboxing techniques. For those of you not completely familiar with sandboxing, a sandbox is effectively a small self-contained version of a computer environment offering a minimal suite of services and capabilities. As the name implies, a sandbox serves as a safe environment for running various applications that might be destructive in other circumstances - yet it can be rapidly built up and torn down as necessary.
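
To make that build-up/tear-down idea concrete, here is a minimal sketch in Python (POSIX-only, and every name in it is hypothetical). A real sandbox isolates at the VM or emulator level rather than relying on a throwaway directory and process limits, so treat this purely as an illustration of the disposable-environment concept:

    import resource
    import shutil
    import subprocess
    import tempfile

    def run_in_throwaway_sandbox(binary_path: str, timeout: int = 30):
        """Run a suspicious program in a disposable working directory with
        crude resource limits, then tear everything down afterwards."""
        workdir = tempfile.mkdtemp(prefix="sandbox-")

        def limit_child():
            # Cap the child's CPU time (seconds) and address space (bytes).
            resource.setrlimit(resource.RLIMIT_CPU, (10, 10))
            resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

        try:
            proc = subprocess.run([binary_path], cwd=workdir, timeout=timeout,
                                  capture_output=True, preexec_fn=limit_child)
            return proc.returncode, proc.stdout
        finally:
            shutil.rmtree(workdir, ignore_errors=True)  # dispose of the sandbox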

In an enterprise security context, sandboxes are regularly encountered in two operational security implementations: safe browser sandboxes (designed to wrap around the web browser, protect the operating system from any maliciousness that may occur while the user is browsing the web, and prevent attacks from contaminating the base operating system) and gateway binary introspection (i.e. the automatic duplication or interception of suspicious binary files, which are then executed within a sandbox that mimics a common operating system configuration for the purpose of identifying and classifying any malicious binaries that come across).

The sandbox approach to malware identification is often referred to as signature-less and offers many advantages over classic anti-virus technologies, but sandboxes also suffer from their own unique set of limitations and inconveniences. Most have to do with the ways in which malware can discover that it is being executed within a sandboxed environment and thus act benignly, and with limits to the faithfulness with which the sandbox imitates a genuinely targeted system (e.g. installed applications, application versions, Internet connectivity, etc.). In general though, sandbox approaches to automated malware inspection and classification are more sophisticated and accurate than signature-based anti-virus approaches.
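
For a flavor of how malware "discovers" that it is being watched, below is an illustrative sketch of two classic checks. The MAC prefixes are well-known VMware/VirtualBox OUIs, but the artifact paths and the timing threshold are examples picked for illustration, not an authoritative list:

    import os
    import time
    import uuid

    # Well-known VMware/VirtualBox MAC prefixes (OUIs), hex without colons.
    VM_MAC_PREFIXES = ("000569", "000c29", "001c14", "080027")
    # Example guest-tools artifacts on a Linux guest; illustrative paths only.
    VM_ARTIFACTS = ("/usr/bin/VBoxClient", "/usr/bin/vmware-toolbox-cmd")

    def looks_like_sandbox() -> bool:
        mac = "%012x" % uuid.getnode()
        if mac.startswith(VM_MAC_PREFIXES):               # virtual network adapter?
            return True
        if any(os.path.exists(p) for p in VM_ARTIFACTS):  # guest additions present?
            return True
        # Crude timing check: heavy instrumentation or emulation slows tight loops.
        start = time.perf_counter()
        sum(range(1_000_000))
        return (time.perf_counter() - start) > 1.0        # arbitrary threshold

    if looks_like_sandbox():
        pass  # an evasive sample would simply behave benignly from here on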

Despite what you may have heard in the flurry of newly released AV+ solutions, automated malware sandbox approaches aren't precisely new - in fact they've had over a decade of operational and, dare I say it, "hostile" use. For example, Damballa has been operating sandboxing technology in the cloud pretty much since the inception of the company. We've chosen to use multiple sandbox technologies (along with bare-metal systems, manual analysis, etc.) to automatically process the mountains of new malware captured every day to mechanically extract their network characteristics, automatically cluster new malware families, and provide attribution to multiple criminal organizations.

Note that, from a product perspective, Damballa doesn't run malware sandboxing technology from within a customer's environment - there's little to be gained from doing so, and the risks greatly outweigh the possible gain. Instead, the automated analysis of suspicious and vetted binaries using cloud-based malware enumeration technologies (which include very sophisticated sandbox approaches amongst other specialized malware dissection engines) has proven to be more accurate, efficient and secure.

Over the years, many different malware analysis sandbox technologies have been developed. For example (not a complete list):

  • Norman Sandbox (2001) - Norman presented its sandbox technology for the first time at the 2001 Virus Bulletin conference in Prague and offered a commercial sandbox version in 2003.
  • CWSandbox (2007) - Originally created by researchers at the University of Mannheim. Available commercially from GFI Software (formerly Sunbelt Software) and for free/academic use via http://mwanalysis.org
  • Sandboxie (2006)
  • Anubis (2006)
  • Joebox (2007)
  • Azure (2008)
  • BitBlaze (2008)
  • ThreatExpert (2008)
  • Ether (2009)
Each sandbox technology tends to be implemented in a different way - usually optimized and tuned for specific classes of malware (or aspects of malware) - and typically utilizes either an emulator or a virtual-machine approach. Emulators tend to be much smaller and faster at analyzing specific classes of malware, but suffer from a greatly limited range of supported (i.e. emulated) operating system APIs. Virtual machine approaches tend to be much more flexible, but are larger and slower.

Over the last decade, virtual machine (VM) based approaches have risen to the fore for automated sandbox approaches to malware investigation. The VM approach allows multiple guest OS images to be loaded simultaneously in order to run the malware within a self-contained and disposable environment. Interestingly enough, as a side note, did you know that the concept of running multiple, different operating systems on a single computer system harkens back to the 1970s, following research by IBM and the availability of the IBM VM/370 system? Talk about coming full circle with "what's old is new again" in security.

For sandboxing technologies, a combination of API hooking and/or API virtualization is often used to analyze and classify the malware. A term you will often see is "instruction tracing", which refers to the observations recorded by the sandbox technology that are eventually used to derive the nature of the binary sample under investigation. This instruction tracing lies at the heart of sandbox-based approaches to automated malware analysis - and is the Achilles' heel exploited by evasive malware.

Instruction tracing is typically implemented in one or more of the following ways:

  • User-mode agent - a software component is installed within the guest operating system and reports all user-based activity to the trace handler (think of this as something like a keylogger).
  • Kernel-mode patching - the kernel of the guest operating system is modified to accommodate tracing requirements (think of this as something like a rootkit).
  • Virtual machine monitoring - the virtual machine itself is modified and instrumented to observe the activities of the guest operating system.
  • System emulation - a hardware emulator is modified to hook the appropriate memory, disk I/O functions, peripherals, etc., and report activities (think of this as a hall-of-mirrors approach). Emulation approaches are great for more difficult operating systems (e.g. Android, SCADA systems, etc.).
Unfortunately, each of these sandboxing techniques exhibits system characteristics that can be detected by the malware being analyzed and, depending upon the nature of the malware, can be used programmatically to avoid detection.
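
As a loose analogy for the user-mode agent approach, the Python sketch below records the calls a "sample" makes. A real agent hooks operating system APIs rather than a language runtime, and the file path and sample function here are invented for the example, but the trace-then-classify idea is the same:

    import sys

    trace_log = []

    def tracer(frame, event, arg):
        # Record each function call the sample makes - a stand-in for the
        # API/instruction trace a real sandbox agent would collect.
        if event == "call":
            trace_log.append(frame.f_code.co_name)
        return tracer

    def suspicious_sample():
        # Stand-in for malware behaviour: drop a file to disk.
        open("/tmp/dropped.txt", "w").close()

    sys.settrace(tracer)
    suspicious_sample()
    sys.settrace(None)
    print(trace_log)  # -> ['suspicious_sample']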

Despite all these limitations, the sandbox approach to malware analysis has historically proven useful in analyzing the bulk of everyday malware.

In more recent years the techniques have become less reliable as malware developers have refined their sandbox detection methods and evolved more subtle evasion techniques. Many of these detection techniques are actually independent of the sandboxing technique being used - for example, the multitude of network-based discovery and evasion techniques discussed in my previous whitepaper, "Automated In-Network Malware Analysis".

The sandbox approach to automated malware identification and classification needs to be backed up with more advanced and complementary malware detection technologies. Organizations facing the brunt of targeted attacks and advanced persistent threats should make sure that they have access to sandbox analysis engines within their back office for the bulk processing of malware samples (running multiple configurations of the standard desktop OS builds, or gold images, deployed within the organization), and should include a mix of bare-metal and honeypot systems to handle the more insidious binary files. Even then, executing malware within your own organization's network or physical location is risky business for the reasons I covered in an earlier blog on the topic - you're "damned if you do, and damned if you don't".

If you're going to go to all the effort of installing and maintaining malware analysis sandboxes within your own organization, my advice is to look beyond the latest installment of tin-wrapped hype and take a closer look at the more established sandbox technologies out there. There's plenty of choice - and many are free.

Post-emptive Detection


In the week before RSA I managed to pull together a blog on the Damballa site covering several of the problems with approaches that focus upon storing "all" the data and (eventually) data mining it in the quest for security alerts - aka "store it all in my barn". Here's what I had to say...

The other week I spoke at the DoD Cyber Crime Conference here in Atlanta and had a number of questions asked of me relating to the growing number of vendors offering "store it all" network monitoring appliances. That whole approach to network monitoring isn't an area of security I've traditionally given much credence to - not because of the practical limitations of implementing it, nor the inefficiencies and latency of the techniques, but because it's an inelegant approach to what I think amounts to an incorrectly asked question.

Obviously, given the high concentration of defense and law enforcement attendees that such a conference attracts, there's an increased emphasis on products that aid evidence gathering and data forensics. The "store it all" angle effectively encompasses devices that passively monitor an organization's network traffic and store it all (every bit and PCAP) on a bunch of disks, tapes or network appliances so that, at some time in the near future, should someone ever feel the need or be compelled to, it would be conceptually possible to mine all the stored traffic and forensically unravel a particularly compelling event.

Sounds fantastic! The prospect of having this level of detailed forensic information handy - ready to be tapped at a moment's notice - is likely verging on orgasmic for many of the "lean forward" incident response folks I've encountered over the years.

The "store it all" network monitoring approach is a pretty exhaustive answer to the question "How can I see what happened within my network if I missed it the first time?" But shouldn't the question be more along the lines of "How can I detect the threat and stop it before the damage is done?"

A "store it all" approach to security is like the ultimate safeguard - no matter what happens, even if my 20 levels of defense-in-depth fail, or someone incorrectly configures system and network logging features (causing events to not be recorded), or multiple layers of internal threat detection and response systems misbehave, I'd still have a colossal data dump that can eventually be mined. Believe me when I say that I can see some level of comfort in adopting that approach. But the inefficiencies of such a strategy make my eye twitch.

Let's look at some scoping numbers for consideration. Imagine a medium-sized business with a couple hundred employees. Assume for the moment that all those folks, along with several dozen servers, are located in the same building. A typical desktop system has a 1Gbps network interface nowadays, and the networking "backbone" for a network of 250 devices is likely to have a low-end operating capacity of 10Gbps - but let's assume that the network is only 50% utilized throughout the day. After a little number crunching, if you were to capture all that network activity and seek to store it, you'd be amassing 54TB of data every day - so perhaps you don't want to capture everything after all?

How about reducing the scale of the problem and focusing upon just the data going to and from the Internet via a single egress point? Let's assume that the organization only has a 10Mbps link to its ISP that's averaging 75% utilization throughout the day. After a little number crunching, you'll arrive at a wholesome 81GB of data per day. That's much more manageable and, since a $50k "store it all" appliance will typically hold a couple of terabytes of data without too many problems, you'd be able to retain a little over three weeks of network visibility.
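
For those who want to check the arithmetic, both figures fall out of a few lines (assuming 86,400 seconds per day and decimal units; the function name is mine):

    def terabytes_per_day(link_bps: float, utilisation: float) -> float:
        # bits/s -> bytes/s -> bytes/day -> TB/day (decimal units)
        return link_bps * utilisation / 8 * 86_400 / 1e12

    print(terabytes_per_day(10e9, 0.50))  # backbone: ~54 TB per day
    print(terabytes_per_day(10e6, 0.75))  # egress: ~0.081 TB per day (81 GB)
    print(2e12 / 81e9)                    # ~24.7 days of retention on a 2 TB appliance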

How does this help your security though? Storing the data isn't helping on the protection front (neither preemptive nor reactive), and it's not going to help identify any additional threats you may have missed unless you're also investing in the tools and human resources to sift through all the data.

To use an analogy: you're a farmer and you've just invested in a colossal hay barn, you've acquired the equipment to harvest and bundle the hay, and you're mowing fields that are capable of growing more hay than you could ever hope to store. Then someone informs you that one of their cows died because it swallowed a nail that probably came from your hay - so you'd better run through all those hay bales stored in your barn and search for any other nails that could kill someone else's cow. The fact that the cow that died ate from a hay bale that's no longer stored in your (full) barn is unfortunate, I guess. But anyway, you're in a reactive situation, and you'll remain in a reactive phase no matter how big your barn eventually becomes.

If you've got a suspicion that metal objects (nails, needles, coins, etc.) are likely to be bad juju, shouldn't you be seeking them out before you've gone to all the work of filling your barn with hay bales? Wouldn't it make more sense to use a magnet and detect those metal objects at the time you're cutting the hay - before you put it in a bale, and before you put those bales in your barn? Even if you had no forethought that metal objects in your hay could eventually cause a problem, do you persist with a strategy of periodically hunting for the classic "needle in a haystack" in your barn despite now knowing of the threat?

Getting back to the world of IT security and threat detection (and mitigation)... I've found that there are greater efficiencies in identifying threats as the network data streams by, rather than in reactive post-event data-mining approaches.

I guess I'll hear some folks ask "what about the stuff they might miss?" There are very few organizations I can think of that are able to employ the skills and resources needed to analyze "store it all" network traffic at a level even remotely comparable to what a security product vendor already includes in its commercial detection offerings - and those vendors are typically doing their analysis in a streaming fashion (and usually with something more sophisticated than magnets).

My advice to organizations looking at adopting "store it all" network monitoring appliances is the following:

1. If you already have all of your protection and detection bases completely covered, maybe deploying these appliances makes sense - provided you employ the dedicated security analysts and incident response folks to make use of the data.
2. Do you know what you're trying to protect? "Store it all" approaches are designed to fill in the gaps of your other threat monitoring and detection systems. Is the threat going to be present at the network egress point, or will you need to store traffic from other (higher-volume) network segments? If so, be cognizant of how far back you can roll your eventual analysis.
3. If you're into hoarding data for the purpose of forensics and incident response, a more efficient and cost-effective approach may be to turn on (and optimize) your logging capabilities. Host logging combined with network logging will yield a very rich data set (often richer than simply storing all network traffic) which can be mined much more efficiently.
4. If host-based logging isn't possible or is proving too unwieldy, and you find yourself having to maintain a high paranoia state throughout the organization, you may want to consider implementing a flow-based security approach and investing in a network anomaly detection system. That way you'll get near real-time alerting for bespoke threat categories, rather than labor-intensive reactive data-mining.
5. If you have money to burn, buy the technology and begin storing all the PCAP data you can. Although I'd probably opt for a Ferrari purchase myself...

Threat Landscape in 2011

OK, so it's that time of the year again and all the security folks are out making predictions. And, as usual, I have a number of inbound calls asking me to pump out the same. Not necessarily "the same" predictions though - since why would marketing and PR teams want to pimp "the same" predictions as everyone else... that'll never get mentioned in the press... ideally a few predictions about how the world will come to an end, and preferably in a way that no one has thought of before. You know the sort of prediction I mean - "By the end of 2011, cyber criminals will have full control of the electronic systems that control sewer pipes in the US and will be extorting cities for millions of dollars - or else they flood the city and cause massive deaths from typhoid and plague."

Cynicism in the run up to Christmas? Bah-humbug :-)

Anyway, despite all that, "predictions" can be pretty useful - but only if they're (mostly) correct and actionable. So, with that in mind, I've posted some "expectations" (rather than predictions) for 2011. I think it's important to understand the trends behind certain predictions. A prediction that comes from nowhere, with no context and no qualification, is about as helpful as a TSA officer.

Here are the 2011 predictions (aka expectations) I posted on the Damballa blog:

1. The cyber-crime ecosystem will continue to add new specialist niches that straddle the traditional black and white markets, for both the tools they produce and the information they harvest. The resulting gray markets will broaden the laundering services they already offer for identities and reputation.
2. Commercial developers of malware will continue to diversify their business models, and there will be a steady increase in the number of authors who transition from "just building" malware construction kits to running and operating their own commercial botnet services.
3. The production of "proof-of-concept" malware, hitherto limited to boutique penetration testing companies, will become more mainstream as businesses that produce mechanical and industrial goods find a greater need to account for threats that target their physical products or production facilities.
4. Reputation will be an increasingly important factor in why an organization (or the resources of that organization) will be targeted for exploitation. As IP and DNS reputation systems mature and are more widely adopted, organized cyber-criminals will be more cognizant of the reputation of the systems they compromise and will seek to leverage that reputation in their evasion strategies.
5. The pace at which botnet operators update and reissue the malware agents on their victims' computers will continue to increase. In an effort to avoid dynamic analysis and detection technologies deployed at the perimeter of enterprise networks or operating within the clouds of anti-virus service providers, criminal operators will find themselves rolling out new updates every few hours (which isn't a problem for them).
6. Malware authors will continue to tinker with new methods of botnet control that abuse commercial web services such as social network sites, micro-blogging sites, free file hosting services and paste bins - but will find them increasingly ineffective as a reliable method of command and control as the pace of takedown operations by security vendors increases.
7. The requirement for malware to operate for longer periods of time in a stealthy manner upon the victim's computer will become ever more important to cyber-criminals. As such, more flexible command and control discovery techniques - such as dynamic domain generation algorithms (see the sketch after this list) - will become more popular in an effort to thwart blacklisting technologies. As the criminals mature their information laundering processes, the advantage of long-term host compromises will be evident in their monetary gains.
8. The rapidity with which compromised systems are bought, sold and traded amongst cyber-criminals will increase. As more criminals conduct their business within the federated ecosystem, there will be more opportunity for exchanging access to victim computers and greater degrees of specialization.
9. Botnet operators who employ web-based command and control portals will enhance the security of both the portal application and the data stolen from their botnet victims. Encryption of the data uploaded to the drop sites will increase and will utilize asymmetric cryptography in order to evade security researchers who reverse engineer the malware samples.
10. The requirement for "live" and dynamic control of victims will increase as botnet operators hone new ways of automatically controlling or scripting repeated fraud actions. Older botnets will continue their batch-oriented commands for noisy attacks, but the malware agents and their command and control systems will grow more flexible even if those capabilities aren't used.
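
As an aside on point 7, a minimal sketch of a date-seeded domain generation algorithm shows why static blacklists struggle against the technique. Real families use custom arithmetic and closely guarded seeds; the seed string and the use of MD5 here are purely illustrative:

    import hashlib
    from datetime import date

    def candidate_domains(seed: str, day: date, count: int = 10):
        # Derive a fresh batch of rendezvous domains for the given day. The
        # operator, knowing the seed, registers one of them in advance;
        # defenders would have to blacklist a new set every single day.
        domains = []
        for i in range(count):
            digest = hashlib.md5(f"{seed}|{day.isoformat()}|{i}".encode()).hexdigest()
            domains.append(digest[:12] + ".com")
        return domains

    print(candidate_domains("example-botnet-seed", date(2011, 1, 1)))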

In situ Automated Malware Analysis

Over the past few years there's been a growing trend for enterprise security teams to develop their own internal center of excellence for malware investigations. To help these folks along, there's been a bundle of technologies deployed at the network perimeter to act as super-charged anti-virus detection and reporting tools.

There's a problem though. These technologies not only tend to be more smoke and mirrors than usual, but are increasingly being evaded by the malware authors and expose the corporate enterprise to a new range of threats.

Earlier this week I released a new whitepaper on the topic - exposing the techniques being used by malware authors and botnet operators to enumerate and subvert these technologies. The paper is titled "Automated In-Network Malware Analysis".

I also blogged on the topic yesterday over on the Damballa site - here.

Cross-posting below...

Automated In-Network Malware Analysis

Someone once told me that the secret to a good security posture lies in the art of managing compromise. Unfortunately, given the way in which the threat landscape is developing, that "compromise" is constantly shifting further to the attacker's advantage.

By now most security professionals are aware that the automated analysis of malware using heavily instrumented investigation platforms, virtualized instances of operating systems or honeypot infrastructures is of rapidly diminishing value. Access to the tools that add sophisticated evasion capabilities to an everyday piece of malware and turn it into a finely honed, one-of-a-kind infiltration package is simply a few hyperlinks away.

Embedding anti-detection functionality can be achieved through a couple of check-boxes, no longer requiring the attacker to have any technical understanding of the underlying evasion techniques.

Figures 1 & 2: Anti-detection evasion check-boxes found in a common Crypter tool for crafting malware (circa late 2008).

Throughout 2010 these "hacker assist" tools have been getting more sophisticated and adding considerably more functionality. Many of the tools available today don't even bother to list all of their anti-detection capabilities because they have so many - and simply present the user with a single "enable anti's" checkbox. In addition, new versions of these subscriber-funded tools come out at regular intervals - constantly tuning, modifying and guaranteeing their evasion capabilities.

Figure 3: Blackout AIO auto-spreader for adding worm capabilities and evasion technologies to any malware payload. Recommended retail price of $59 (circa August 2010).

Pressure for AV++

In response to the explosive growth in malware volumes and the onslaught of unique, one-of-a-kind targeted malware that's been "QA tested" by its criminal authors prior to use in order to guarantee that there's no desktop anti-virus detection, many organizations have embarked upon a quest for what can best be described as "AV++".

AV++ is the concept behind some almost magical array of technologies that will capture and identify all the malware that slips past all the other existing layers of defense. Surprisingly, many organizations are now investing in heavily instrumented investigation platforms, virtualized instances of operating systems and honeypot infrastructures - all things that are already known to have evasion and bypassing tools in circulation - despite the evidence. Has fear overcome common sense?

An area of more recent concern lies within the newest malware creator tool kits and detection methodologies. While many of the anti-detection technologies found in circulation over the last 3-4 years have matured at a steady pace, the recent investments in deploying automated malware analysis technologies within a targeted enterprise's network have resulted in new innovations and opportunities for detection and evasion.

Just as the tactic of adding account lockout functionality to email accounts in order to prevent password bruteforcing created an entirely new threat (the ability to DoS the mail system by locking out everyone's email account), so we see the development of new classes of threats in response to organizations that attempt to execute and analyze malware within their own environments.

In a "damned if you do, and damned if you don't" context, the addition of magical AV++ technologies deployed within the borders of an enterprise network has opened the doors to new and enhanced evasion tactics.

To best understand the implications and dynamics of the new detection and evasion techniques being used by the criminals targeting businesses, I've created a detailed white paper on the topic.

Intel Pentium Processor "Performance Upgrade"


Catching up with some of the RSS feeds I monitor earlier today, I came across some chatter about the newly launched/noticed upgrade option for Intel processors. Specifically, the $50 upgrade option for the new Pentium G6951.

So what's all this about? Apparently, the new processor can be "upgraded" by purchasing what amounts to a license key for turning on embedded functionality of the chip. Or, to put it another way, you've purchased a PC with a downgraded Pentium processor with disabled features - but can "enable" those features at a later date by simply purchasing the aforementioned "upgrade card".

There's a lot of fervor concerning this particular innovation from Intel. Granted, the concepts aren't particularly new and other technology companies have tried similar tactics in the past (e.g. I was once told that the IBM Z-Series mainframes ship with everything installed but, depending upon the license you purchased, not all the capacity/features of the system are enabled), but it's not something I'm a particular fan of. Then again, it would seem that I'm probably not the type of consumer that Intel would be marketing this product strategy to either.

The Intel site describing the upgrade technology/processes/etc. can be found at http://retailupgrades.intel.com/ - although it does appear to still be in a state of "under construction", as evidenced by the following response to the FAQ question of "Which PC's with this upgrade work on?"


Good luck with this one, Intel. It's not like I'll be buying any product (Intel or other) knowing that it had been intentionally disabled and is subject to an additional fee for activation.

The exception would be if I felt like doing a bit of RE to get the full functionality without buying into the whole marketing "vision" (subject to license agreements, yadda, yadda, yadda...).

Musings on Metasploit


The week before last I attended and spoke at the OWASP AppSec 2010 conference on the first day; HD Moore spoke on the second day.

It's always fun to watch HD Moore as he covers the latest roadmap for Metasploit - explaining the progress of various evasion techniques as they're integrated into the tool and deriding the progress of various "protection" technologies.

A couple of things he said at the time stuck in my mind, and I've been musing over them throughout the last week. One comment - in response to a question that had been raised - was that IDS/IPS evasion is already sufficient within Metasploit and that further techniques would be "like kicking a cripple kid". Granted, not very PC - but that's the purpose of such statements.

I agree to a certain extent that IDS/IPS technologies can be evaded - but there's a pretty broad spectrum of IDS/IPS technologies, and one size doesn't fit all. For example, HD Moore mentioned that simply using HTTP compression (i.e. GZIP) is enough to evade the technology. Not so. For IDS/IPS technologies with full protocol parsing modules (rather than packet-based signature matching), such techniques won't work. But that's by the by. Depending upon the sophistication of the attackers and their knowledge of the strengths and weaknesses of the IDS/IPS technology, evasions can often be found in short order (depending upon the type of vulnerability being exploited). While it's obviously to HD Moore's advantage to talk a good game on behalf of Metasploit and novel evasion techniques, it doesn't hurt to be reminded that there is an agenda behind making such broad claims.

The other comment he made related to the progress of adding more advanced payloads and exploit techniques. While I can't remember precisely the terms he used, the way he was discussing the topic - how much fun everyone was having inventing and developing the new techniques - I couldn't help but feel a little ashamed that things within the professional (attack-based) security field had reached this level.

What do I mean? Well, from the way in which HD Moore was describing things to the audience, I couldn't help but think in terms of physical weapons research. The description of the nestled exploit and evasion modules, and how the developers/researchers were going about developing better, faster and more efficient techniques, made me visualize a game of one-upmanship between bullet designers. Something like the following...

Researcher 1: I think we should make a bullet that's Teflon coated but acts like a dum-dum bullet that expands to make a bigger hole in the target.

Researcher 2: No, I've got a better idea. Instead of using the dum-dum style of bullet, I've come up with a way of making it fragment quicker and completely eviscerate the target internally.

Researcher 1: How about we add that new flaming compound so that as the target gets eviscerated he'll combust at the same time.

Researcher 2: That's cool! I bet there'll be crimson smoke coming out of the target too.

Researcher 1: Ha ha. Cool! Let's build it and test it against those homeless people across the road.

I'm guessing you're thinking that I'm perhaps a little warped for thinking these kinds of things (and for writing them down). But it's something that sprang to mind at the time, and again last week. How much is too much?

Granted, "good enough" protection can be defeated by using a "good enough" evasion technique. But I wonder when (or if) we'll ever need people to be more responsible for their actions in developing what are effectively the cyber-equivalent of weapons. I strongly doubt that there'll ever be a cyber-equivalent of the Hague Convention though.

Mobile Threats - Cellular Botnets

Smartphones are getting smarter. You know it, I know it, and every would-be criminal botnet operator knows it too. But why haven't we seen many cellular botnets? It's not as if it's difficult to exploit, compromise or otherwise socially engineer a remotely controllable agent onto the handset.

Thoughts on the topic went up on the Damballa blog site earlier today and are mirrored below...

Last month I gave a couple of presentations covering the current state of cellular mobile botnets - i.e. malware installed on mobile phone, smartphone and cellular devices, designed to provide remote access to the handset and everything on it. While malware attacks against dumb and smart phones are nothing new, the last three years of TCP/IP default functionality, compulsory data plans, and access to and provisioning of more sophisticated development APIs have all made it much easier for malware developers to incorporate remote control channels into their malicious software. The net effect is the growing "experimentation" with cellular botnets.

I purposefully use the term "cellular" so as to focus attention on the botnet agents' use of the mobile Telco's cellular network for Internet access - rather than more localized WiFi and Bluetooth services. Worms such as Commwarrior back in 2005 made use of Bluetooth and MMS to propagate between handsets - but centralized command and control (CnC) was elusive at the time (thereby greatly limiting the damage that could be caused, and effectively neutering any criminal monetization aspirations). More recently though, as access to the TCP/IP stack within the handsets has become more accessible to software developers through better API functionality from the OS vendors, the tried and tested CnC topologies for managing (common) Internet botnets are being successfully applied and bridged to cover cellular botnet control.

Discussions about smartphone botnets are making it to the media more frequently - albeit mostly the IT and security press - for example, "Botnet Viruses Target Symbian Smartphones". Based upon the last couple of presentations I've given on the topic, lots of people are worried about cellular botnet advances - none more so than the Telco providers themselves.

Sure, there are plenty of ways of infecting a smartphone - successful vectors to date have been Trojaned applications, fraudulent app store applications, USB infections, desktop synchronization software, MMS attachments, Bluetooth packages, unlocking platform application downloads/updates, etc. - but relatively little has been publicly discussed about the use of exploit material. As we all unfortunately know, one of the key methods of infecting desktop computers is through the exploitation of software vulnerabilities. Are we about to see the same thing for smartphones? Will cellular botnets similarly find that handset exploitation is the way to propagate and install botnet agents?

In all likelihood, vulnerability exploitation is likely to be a lesser problem for smartphones - at least in the near future. Given the diversity in hardware platforms, operating systems and chip architectures, it's not as easy to create reliable exploits that can affect more than one manufacturer's line of products. That said, some product lines number in the tens of millions of devices, and the OSes are becoming increasingly better at making the underlying hardware transparent to malicious software and exploitation. I'll also add that there are plenty of vulnerabilities, "reliable" exploits up for sale, and interested researchers bug hunting away - but at the moment there's little financial gain for professional botnet operators compared to the well-established (and much softer) desktop market of exploitable systems. Still, we have to be careful not to marginalize the threat; it's worth understanding that exploits are already being developed and (in very limited and targeted distribution) are being used for installing botnet agents on vulnerable handsets.

This is of course causing increasing heartburn for the mobile Telco providers - since their subscription models essentially mean that they're responsible for cleaning up infected handsets and removing the malicious traffic, much more so than traditional ISPs are. If a handset is infected, their customer will likely incur a huge bill and (as typically happens) the Telco will not be able to recover the losses from the customer. Attempts to recover the cost from the customer will increasingly yield two results: 1) they won't be a customer any longer, and 2) the negative PR will have the Telco rolling in pain.

Fortunately, as cellular botnets become more common and sophisticated in their on-device functionality, they're also going to become more mainstream and closely related to classic Internet botnets. What this means is that their CnC channels and infrastructure will increasingly be close to (or the same as) "standard" botnets. Which in turn means that cellular botnets can be thwarted at the network layer within the mobile Telco operator's own network (similar to what some major ISPs are trialing with their residential customers) - thereby turning the threat into something that they can protect against. How is that possible? Well, a quick browse of the Damballa website should provide a fair bit of insight into that - and perhaps I'll post a follow-up blog on key techniques sometime soon.
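
As one illustrative (and much simplified) example of what that network-layer protection could look like, a carrier might flag subscribers whose DNS traffic produces bursts of failed lookups - a common side effect of an infected handset hunting for its command-and-control rendezvous point. The record format and threshold below are hypothetical:

    from collections import Counter

    def flag_noisy_resolvers(dns_events, nxdomain_threshold=50):
        """dns_events: iterable of (subscriber_id, response_code) tuples.
        Returns subscribers whose failed lookups exceed the (hypothetical)
        threshold for the observation window."""
        failures = Counter(sub for sub, rcode in dns_events if rcode == "NXDOMAIN")
        return [sub for sub, n in failures.items() if n >= nxdomain_threshold]

    # Example: a handset cycling through generated CnC domains stands out.
    events = [("handset-123", "NXDOMAIN")] * 60 + [("handset-456", "NOERROR")] * 60
    print(flag_noisy_resolvers(events))  # -> ['handset-123']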

Gold Dust or Nuggets? A Hacker's Tell

After a hard day's conferencing, security folks will typically end up in the hotel bar and, with odds often appearing to be in excess of 3:1, the conversation will inevitably encompass a discussion of which internal corporate systems are the most hacked/vulnerable/indefensible.

If the migratory cluster of bar stools and hotel chairs encircling the obligatory way-too-small table contains more than a pair of reformed hackers or pentesters, by listening in you'll end up gaining quite a bit of insight into why the better hackers are so often successful (and you'll probably also pick up a few tells for future reference).

While there's much literature and many tutorials to be found that explain the technical aspects of how to successfully compromise corporate defenses, exploit systems and ultimately extract data, there's actually very little "guidance" on which systems should be targeted, and why, once you've breached the network. Sure, there are plenty of discussions covering the technical aspects of how to raise privileges (e.g. locating and exploiting the Active Directory server in order to acquire corporate user/admin credentials, etc.), but which systems really provide the treasure trove?

Quite a few folks I've been speaking with will initially (and specifically) target the systems used by the corporate security teams. These systems are important for a couple of reasons: 1) internal security folks often have good access to a wide range of other systems that may be valuable, and 2) by keeping an eye on the "watchers" you'll know when you're close to being caught and can stay a couple of steps ahead. Personally, I think it's a ballsy move if you can pull it off - but it's not something I'd treat as a priority. There are a lot of inherent risks in trying to tackle systems maintained and watched by the professionally paranoid - so it may be more prudent to gather better intel first.

Another primary tactic for some folks is to go after the obvious corporate data repositories - the backend databases, business intelligence systems and storage facilities. This mode of attack I'd associate much more with the quick "get in and get out of Dodge as fast as you can" approach - maximizing the potential reward by sacrificing (IMHO) a fair degree of stealthiness and persistence. It typically works very well - and is an ideal tactic for "compelling result" penetration testing or hackers looking for rapidly monetizable data.

A tactic that I've always preferred (dependent upon the specific objectives of the pentest, of course) is to initially locate and target the QA systems. For the folks that target the corporate security systems or go after the official data repositories, going after the QA systems sounds not only unexciting but also like a complete and utter waste of time. But hear me out first. QA systems really are a veritable treasure trove of corporate data. Consider the following:

1. Like a smelly hobo camped outside a high-street McDonalds, security analysts and helpdesk alike tend to keep their distance from (what are typically) "unmanaged" QA systems.
2. QA systems often contain complete copies of the high-value corporate data so that development teams and QA/testing personnel can actually test the applications correctly. You'll often also note that the more "valuable" a particular suite of data, application or business process is, the higher the probability that the QA copies of the data will in fact be real-time mirror images of live data.
3. Nobody ever "owns" the QA systems. They're always the last systems to get patched (if ever), and access controls typically hover between poor and non-existent.
4. When was the last time anyone bothered to look at the audit logs? With so much ad-hoc system use, trials and testing, it's a nightmare from both a detection and forensics perspective. QA systems are an ideal place from which to recon an enterprise network and retain a persistent toe-hold within the organization.
5. QA systems typically have "temporary" access to all the core business systems and data repositories within a corporate network. By "temporary" I mean in theory, if you listen to the server administrators - in practice they can be considered permanent gateways.
6. Testing systems are typically littered with copies of entire development source code trees - making it a piece of cake to acquire the latest business logic, intellectual property or hard-coded/embedded passwords to other critical systems within the corporate entity.

Sure, there are plenty of other opportunistic systems to go after within a target's organization once it has been breached, but, all other factors being equal, there are certain tactical tells that can be readily associated with the types of hackers and pentesters out there (the previous three just being examples I heard/discussed repeatedly over the last couple of weeks).

The primary objectives and "styles" of the hackers/pentesters remind me a little of those old Western gold-rush films. Rounding up the Sheriff and his deputies and locking them up in their own jail before robbing the bank is a little analogous to going after the security folks/systems. Meanwhile, the priority targeting of the corporate data repositories reminds me of a stagecoach robbery - the pounding of hooves and guns blazing. Yet going after the QA systems reminds me of a movie in which the villains dig up the ground under the saloon and casino - hoovering up all the gold dust that patrons had lost over the years through the cracks in the floorboards.

Grab a beer with a friendly hacker or pentester and ask them how they'd earn their gold.

Anti-FUD FUD


Like the cycling of the moon, the security industry exhibits periods of waxing and waning on particular issues.

At the moment it looks like we're entering the waxing gibbous stage of the anti-FUD (Fear, Uncertainty and Doubt) movement. In recent weeks the proliferation of calls to deal with FUD within the security industry has picked up. Depending upon the particular sector, you'll encounter discussions about overcoming the fears associated with shifting data into the cloud, why "advanced" threats aren't so important when the bulk of attacks don't need to be advanced, etc.

As you'd expect, there are quite a few security folks who make their dime by being vocal about a particular topic, and it's that time of the cycle when the anti-FUD speeches get dusted off and replayed. That's not to say that the anti-FUD folks are unique. There's a biannual waxing and waning to the Full Disclosure movement too, along with annual revisits to the topic of vulnerability purchasing programs, etc.

The anti-FUD movement consequently promotes its own kind of "FUD" - speculating that the world would be a better place if FUD ceased to exist in the security world, and that organizations would be better able to prepare their defenses without the distractions of the next biggest threat.

Some aspects of the anti-FUD cause I might just agree with, but in general I'm less inclined to follow much of the rhetoric from die-hard security aficionados. Why? Well, for the most part, many of their statements are naive in that they fail to account for the world they live in. Listening to them, you'd think this is an IT security problem - but in reality "FUD" is a critical element of the sales cycle, regardless of whether you're selling car tires or anti-zit cream.

Every second car advertisement on TV extols the virtues of safety features, and even the drunk-driving and "wear your seat-belt" literature distributed by state authorities covers the gruesome consequences of not following the rules and taking appropriate action. FUD gains the attention of the viewer/reader, educates them in some capacity, and makes them think more about the consequences of their actions (or inactions).

FUD is everywhere - just watch the ads covering zit cream and tampons on TV and you'll get the idea. FUD is a critical element of the sales cycle, eliciting a reaction to the message (generally, aiming for a buying reaction).

Folks that jump on their anti-FUD high horses, from my own experience, tend to struggle with commercial sales because they fail to understand what FUD is all about - education, compulsion and sales.

Having said all that, let's not go to the other extreme. In order to make their FUD more compelling and elicit a greater compulsion in listeners, some sales folks will stretch the truth into the realm of fiction. These folks need to be quickly reined in by the company paying their paycheck. To do otherwise would inevitably result in pissed-off customers and a loss of business.

Final thoughts? The security industry is no different from any other industry with innovative products aimed at solving the problems of today and the future. FUD is a way of life; get used to it.
