AdelPlex

Protection & Privacy towards a Safer Internet in the Arab Region

The increase in the number and diversity of Internet users in the Arab world and in developing countries has led to new risks and crimes. The workshop is an opportunity to continue the ongoing dialogue at the regional and international levels on the close relationship between access, protection and privacy, with a focus on two important pillars: the protection and the privacy of the family on the Internet.

On the occasion of Universal Children’s Day, MCIT, in coordination with the Arab Information and Communication Technologies Organization (AICTO), is organizing an international workshop under the slogan “Protection & Privacy... towards a Safer Internet in the Arab Region”.

Watch Live Now ...

Google Runs Over a Million Servers

Google never says how many servers are running in its data centers. The new estimate is based on information the company shared with Stanford professor Jonathan Koomey, who has just released an updated report on data center energy usage. Download Link

Google’s David Jacobowitz, a program manager on the Green Energy team, told Koomey that the electricity used by the company’s data centers was less than 1% of 198.8 billion kWh – the estimated total global data center energy usage for 2010. That means that Google may be running its entire global data center network in an energy footprint of roughly 220 megawatts of power.
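As a rough sanity check, assuming the figures above (1% of 198.8 billion kWh per year, averaged over the roughly 8,760 hours in a year):

  0.01 × 198.8 billion kWh/year ≈ 1.99 billion kWh/year
  1.99 × 10^9 kWh ÷ 8,760 h ≈ 227,000 kW ≈ 227 MW of continuous draw

which is in line with the roughly 220 megawatt figure above.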

“Google’s data center electricity use is about 0.01% of total worldwide electricity use and less than 1 percent of global data center electricity use in 2010,” Koomey writes, while cautioning that his numbers represent educated guesses extrapolated from the company’s information. “This result is in part a function of the higher infrastructure efficiency of Google’s facilities compared to in-house data centers, which is consistent with efficiencies of other cloud computing installations, but it also reflects lower electricity use per server for Google’s highly optimized servers.”

South Korea to Block Port 25 over Security Issues

Summary: South Korea is considering a nationwide block of port 25, as an anti-spam countermeasure aimed at reducing the volume of spam affecting the country.



The ban, set to go into effect in December, will replace port 25 with port 587 (message submission) and port 465 (SMTPS).
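For legitimate senders the change mainly means reconfiguring mail clients and scripts to use authenticated submission instead of relaying directly on port 25. A minimal sketch using the JavaMail API is shown below; the host name, credentials and addresses are placeholders:

  // Submit mail on port 587 with STARTTLS instead of relaying on port 25.
  // Host name, credentials and addresses are placeholders.
  import java.util.Properties;
  import javax.mail.*;
  import javax.mail.internet.*;

  public class SubmitMail {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put("mail.smtp.host", "smtp.example.com");
          props.put("mail.smtp.port", "587");               // submission port, not 25
          props.put("mail.smtp.starttls.enable", "true");   // upgrade the session to TLS
          props.put("mail.smtp.auth", "true");

          Session session = Session.getInstance(props, new Authenticator() {
              protected PasswordAuthentication getPasswordAuthentication() {
                  return new PasswordAuthentication("user@example.com", "password");
              }
          });

          Message msg = new MimeMessage(session);
          msg.setFrom(new InternetAddress("user@example.com"));
          msg.setRecipient(Message.RecipientType.TO, new InternetAddress("dest@example.net"));
          msg.setSubject("Test");
          msg.setText("Submitted over port 587 with STARTTLS.");
          Transport.send(msg);
      }
  }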

Spamming through web-based email is yet another way for cybercriminals to bypass the new rules. Once the CAPTCHA-solving process for popular free web-based email providers has been outsourced to Indian providers of CAPTCHA-solving services, thousands of newly registered accounts can be used automatically for outgoing spam, once again successfully bypassing the newly introduced regulation.

Why won't the block be fully effective? Mostly because of the way modern malware and spam networks operate. For instance, modern malware ships with built-in SMTP engines that are port-independent. Moreover, geolocated, malware-infected hosts within South Korea could be updated with the new settings in a matter of seconds, continuing the abuse of legitimate networks while playing by the newly introduced rules.

Code Vulnerability Scanner, Android Mobile App

The Nessus Android app lets you log in to your Nessus scanner server and remotely control your vulnerability scans (start, stop and pause scans of your hosted Internet web applications) as well as analyze the results directly from your Android device.

Download it NOW

Ruby's RSA Crypto Bug

The Ruby developers had a near miss with a crypto disaster when an "awful bug" crept into the language's source code development tree. A simple programming error made the library generate RSA keys that caused the encryption mechanism to produce clear text. Luckily, the error was caught before it made it to any release version of Ruby, but it provides a good example of how a simple error can have potentially far-reaching effects.

The RSA asymmetric encryption technique differentiates between secret and public keys. The public key consists of a modulus n and an exponent e. The plaintext, m, is encrypted according to the mathematical formula c = m^e mod n.

The point is that the ciphertext, c, can normally only be decrypted with the secret key. However, the Ruby bug generated RSA keys with an exponent e = 1. This only leaves c = m mod n.

As m is always less than n, the RSA formula collapses into a variation of the legendary ROT26 cipher: c = m.
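To see the collapse concretely, here is a toy illustration in Java rather than Ruby, with a deliberately tiny modulus standing in for a real key:

  import java.math.BigInteger;

  public class WeakExponentDemo {
      public static void main(String[] args) {
          BigInteger n = BigInteger.valueOf(3233);   // toy modulus, 61 * 53
          BigInteger m = BigInteger.valueOf(42);     // plaintext, m < n
          BigInteger e = BigInteger.ONE;             // the buggy public exponent

          BigInteger c = m.modPow(e, n);             // c = m^e mod n
          System.out.println(c);                     // prints 42: the "ciphertext" is the plaintext
      }
  }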

Among other things, RSA is used for digital signatures. A signature generated with a key from a Ruby system with the defect is the equivalent of a blank cheque, as it will cause any signature to be considered valid. Incidentally, the problem was caused by a trivial programming error that has nothing to do with cryptography: in a for loop that sets individual bits, the criterion for abandoning the loop was set incorrectly, causing the loop to be abandoned after the first iteration.

The problem only affects programs that have generated RSA keys with development versions of Ruby. The recent release of Ruby 1.9.3 is not affected by the problem. Where the problem does exist, the encryption and decryption functions appear to work correctly, and the bug has no effect on externally generated keys that are imported. Users of the development version of Ruby should check whether their Ruby programs generate keys and, if necessary, generate new keys as soon as possible.

Juniper Routing Problem disrupts Level 3 network

Yesterday several US and UK ISPs, including Time Warner Cable, Research in Motion, Eclipse Internet, Easynet and Merula, reported a range of errors and problems on the Level 3 backbone. Level 3 has now confirmed the reports. The cause of the problems appears to have been a bug in Juniper's Junos router operating system affecting the border gateway protocol (BGP).

US ISP Phyber Communications has told various US media organisations that other networks using Juniper routers were also affected by the failure, with most affected devices generating a memory dump and then restarting. Juniper manager Mark Bauhaus confirmed that the company had been made aware of the BGP error in edge routers on Monday morning. He stated that the bug had affected only a small number of Juniper customers and that the company already had a patch for the problem which was awaiting distribution to affected routers.

Source: Linux.com

XtreemOS, Enabling Linux for the Grid

XtreemOS is a Linux-based operating system supporting Virtual Organizations over Grid computing platforms. The development of XtreemOS is currently funded by the European Commission as an Integrated Project under the Sixth Framework Programme (FP6). The project started in June 2006 with a planned duration of 48 months, ending in May 2010; as of early 2010, the project has been extended and will run until September 2010. The project is led by INRIA and involves 19 research and industrial partners from Europe and China.

Download it :)

Slow HTTP DoS Attacks

With the recent OWASP AppSec DC presentation on Slow HTTP POST DoS attacks, concern about web server platform DoS has reached a new high. Notice that I said web server platform and not web application code. The attack scenario raised by the slow HTTP POST attack relates to the web server software (Apache, IIS, SunONE, etc.) and cannot be directly mitigated by application code. In this blog post, we will highlight the two main varieties of slow HTTP attacks, slow request headers and slow request bodies, and then provide some new mitigation options for the Apache web server platform with ModSecurity.

Network DoS vs. Layer-7 DoS

Whereas network-level DoS attacks aim to flood your pipe with lower-level OSI traffic (SYN packets, etc.), web application layer DoS attacks can often be achieved with far less traffic. The point is that the amount of traffic needed to cause an HTTP DoS condition is typically well below what a network-level device would flag as anomalous, so such devices will not report it the way they would a traditional network-level botnet DDoS attack.

Layer-7 Connection Consumption Attacks

Ivan Ristic brought up the concept of connection consumption attacks in his 2005 book "Apache Security":

5.4.3. Programming Model Attacks

The brute-force attacks we have discussed are easy to perform but may require a lot of bandwidth, and they are easy to spot. With some programming skills, the attack can be improved to leave no trace in the logs and to require little bandwidth.

The trick is to open a connection to the server but not send a single byte. Opening the connection and waiting requires almost no resources by the attacker, but it permanently ties up one Apache process to wait patiently for a request. Apache will wait until the timeout expires, and then close the connection. As of Apache 1.3.31, request-line timeouts are logged to the access log (with status code 408). Request line timeout messages appear in the error log with the level info. Apache 2 does not log such messages to the error log, but efforts are underway to add the same functionality as is present in the 1.x branch.

Opening just one connection will not disrupt anything, but opening hundreds of connections at the same time will make all available Apache processes busy. When the maximal number of processes is reached, Apache will log the event into the error log ("server reached MaxClients setting, consider raising the MaxClients setting") and start holding new connections in a queue. This type of attack is similar to the SYN flood network attack we discussed earlier. If we continue to open new connections at a high rate, legitimate requests will hardly be served.

If we start opening our connections at an even higher rate, the waiting queue itself will become full (up to 511 connections are queued by default; another value can be configured using the ListenBackLog directive) and will result in new connections being rejected.

Defending against this type of attack is difficult. The only solution is to monitor server performance closely (in real-time) and deny access from the attacker's IP address when attacked.

The issue at hand with these attacks is that the client(s) open connections with the web server and send request data very slowly. For those of you familiar with the old LaBrea Tarpit app for slowing down network-based worms, this is somewhat of a reverse approach. Instead of the defender (LaBrea) sending back a TCP window size of 0 to the attacker (worm), which would force the TCP client to wait for a period of time before resubmitting, in this scenario the attacker is the one forcing the web server to wait.

If a web client opens a connection and doesn't send any data, the web server will wait until the connection's Timeout value is reached. Wanna guess how long that time interval is in Apache by default? 300 seconds (5 minutes). This means that if a client can simply open a connection and not send anything, that Apache child process thread will sit idle, waiting for data, for 5 minutes. Ouch...

So the next logical question to ask from the attacker's perspective is: what is the upper limit on the number of concurrent connections for Apache? This depends on your configuration, but the main ServerLimit directive has a hard-coded maximum of 20,000 (most sites run far fewer). This limit makes it very feasible for a much smaller number of DDoS clients to take down a site, compared with the extremely large number required for network-based pipe flooding.
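One way to shrink that window, separate from the ModSecurity rules mentioned above, is Apache's mod_reqtimeout module (available in Apache 2.2.15 and later). A minimal sketch, with illustrative thresholds:

  # Abort the connection if the request headers are not complete within 20 seconds
  # (extendable to 40 seconds as long as data keeps arriving at 500 bytes/second),
  # and apply the same minimum rate to the request body.
  RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

  # The stock Timeout of 300 seconds is what makes idle connections so cheap for
  # the attacker; a lower value narrows the exposure for the remaining cases.
  Timeout 60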

There are two types of attack to cover when a malicious client never sends a complete Request as specified by the HTTP RFC:

Request = Request-Line              ; Section 5.1
          *(( general-header        ; Section 4.5
           | request-header         ; Section 5.3
           | entity-header ) CRLF)  ; Section 7.1
          CRLF
          [ message-body ]          ; Section 4.3


Google Android Malware

A bunch of Android apps have been pulled from the Android Market because they contained malware. There were over 50 infected applications; these were copies of legitimate apps from legitimate publishers, modified to include two root exploits and a rogue application downloader. This isn't the first example of malware on Android, but it may be the first to affect Google's own Android Market. The new malware has been referred to as DroidDream, RootCager, and myournet by various researchers and media outlets.

So how does this malware work? First of all, we can start with the basics of how Android apps work. Android applications are mostly written in Java and use XML files for configuration. The Android compiler suite takes a developer's Java files, compiles them to class files, and then converts the class files into dex files. Dex files are bytecode for the Dalvik VM that runs Android apps. The XML files are converted to a binary format that's optimized to produce small files. The dex files, binary XML files, and other resources needed to run an application are packaged into an APK file. These files have the extension .apk, but they're just ZIP files. Once the APK package is generated, it's signed with a developer's key and uploaded to the Android Market through Google's website.

When a user wants to install an app from the market, the APK package is downloaded and extracted onto their device. When an application is started, the Android device runs what is called an Activity in the application. The initial Activity, the program's entry point, is specified in a file called AndroidManifest.xml.

The AndroidManifest.xml file in infected packages was modified by the malware author to launch the malware first instead of the original application's Activity. To create the infected APK packages, they unpackaged the original application's APK file, modified its files and inserted their malicious code, then re-packaged the app and signed it with their own key. The apps were uploaded to the market and, in some cases, were downloaded tens of thousands of times. The modified portion of the AndroidManifest.xml file used by this malware does the following:

It causes the com.android.root.main Activity to be executed when the application is launched, and it configures two background services implemented in the classes com.android.root.Setting and com.android.root.AlarmReceiver. In the original application, the main Activity is a completely different class.
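Based on that description, a manifest modified in this way would look roughly like the following; this is an illustrative reconstruction, not the verbatim file from the infected APKs:

  <!-- Illustrative sketch; class names are the ones described above -->
  <application android:label="Infected App">
      <!-- The malware's Activity replaces the original one as the launcher entry point -->
      <activity android:name="com.android.root.main">
          <intent-filter>
              <action android:name="android.intent.action.MAIN" />
              <category android:name="android.intent.category.LAUNCHER" />
          </intent-filter>
      </activity>
      <!-- Background services carrying the payload -->
      <service android:name="com.android.root.Setting" />
      <service android:name="com.android.root.AlarmReceiver" />
      <!-- The original application's Activity is still declared, just no longer the launcher -->
      <activity android:name="net.luck.star.mrtm.HomeActivity" />
  </application>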

What does this com.android.root.main Activity do when it runs? There are a few different ways to figure this out. My current preferred method is disassembling the Dex bytecode using the baksmali disassembler, which produces human-readable disassemblies of compiled classes. It's also possible to decompile Dex files into Java in some cases, but it's still good to be able to read the disassembled code. The smali disassembly of the com.android.root.main.onCreate() function shows what it does when it's executed by the Android system.

That code starts the com.android.root.Setting service. It works by creating an Intent instance, passing com.android.root.Setting's class object to the constructor, and then calling the Activity's startService() method with that Intent. This is roughly equivalent to the following Android Java code:

  Intent intent = new Intent(this, com.android.root.Setting.class);
  startService(intent);

Next, the malware will start the original main Activity for the application:

This code locates the class for the original Activity by calling the static function Class.forName(), passing the Activity's class name as the argument. If the class is found, a new Intent instance is created for the class and startActivity() is called. Here's a Java representation, without the exception handling:

  Class klass = Class.forName("net.luck.star.mrtm.HomeActivity");
  Intent intent = new Intent(this, klass);
  startActivity(intent);

After this, the com.android.root.main Activity exits. So at this point, the com.android.root.Setting service is running in the background and the original application starts up on the device. The unsuspecting user won't notice anything amiss.

The exploits

Now we can take a look inside com.android.root.Setting. The onCreate() method of this class will attempt to get root access on the phone using two separate exploits: a udev exploit (CVE-2009-1185), and one based on adb resource exhaustion (the so-called rageagainstthecage exploit).

The udev exploit is executed by the com.android.root.udevRoot class. The actual exploit code is compiled C and is located in a file named exploid in the APK's assets directory. This file is an ARM Linux ELF executable, a compiled version of the exploid2.c exploit, which was publicly released around July of 2010. The exploit works by using hotplug to execute a shell as root. To actually cause hotplug to run, exploid will modify the state of the WiFi adapter and then return it to the original state. Other public versions of this exploit instructed the user to perform that action manually.

The installSu() method makes use of a file named profile in the APK's assets directory, another ARM ELF executable that just calls setgid(0) and setuid(0), then execv("/system/bin/sh"): a classic root shell. This gets installed as /system/bin/profile with 04755 permissions, giving it the ability to run any command as root.

The adb exploit is a bit more complicated. It's executed by the com.android.root.adbRoot class, and the actual exploit is the compiled C in the rageagainstthecage asset. This exploit is also known as CVE-2010-EASY, and uses a resource exhaustion attack against the Android debug bridge process, adb. adb initially runs as root, but drops privileges with setuid(). However, the return value from setuid() isn't checked. Since setuid() fails when the target user is over the RLIMIT_NPROC value, adb will continue running as root if the user's process limit is maxed out. The rageagainstthecage exploit first determines the RLIMIT_NPROC value and then creates enough processes to reach the limit. When the limit is reached, a single process is killed and adb is restarted to take its place.

The adbRoot class makes use of the jackpal.AndroidTerm library, also packaged with the modified APK, to communicate with the rageagainstthecage exploit.

Phoning home

Before the root exploits are attempted, the malware starts a thread that makes an HTTP POST to a remote server. The information is formatted as XML and looks like this:

(XML payload listing: a short template whose %s placeholders are filled in with device values, including the IMEI and IMSI discussed below.)

The interesting values here are the IMEI (International Mobile Equipment Identity), which identifies the physical phone handset, and the IMSI (International Mobile Subscriber Identity), which identifies the SIM card in use in the phone. These values are unique identifiers for your phone, and tracking them allows the malware's controllers to determine exactly how many devices have been compromised.

Dropping more malware

At this point, if the malware has root access, it can do anything it wants to the phone. The last thing the com.android.root.Setting class does before terminating is install another APK package that's included in the infected APK.

That installation code first checks whether the com.android.providers.downloadsmanager package is installed. If not, it copies the sqllite.db file, found in the assets folder of the APK, to /system/app/DownloadProvidersManager.apk. We haven't fully analyzed this app yet, but it appears to have the ability to download and install other apps on an infected phone. The included AndroidManifest.xml file configures the DownloadCompleteReceiver class to run as soon as the phone boots up (using the android.intent.action.BOOT_COMPLETED intent) as well as when the phone state changes, for example when an incoming call is detected or a phone call is initiated (using the android.intent.action.PHONE_STATE intent).
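In manifest terms, the registration described above would look roughly like this; an illustrative sketch, not the verbatim file from the dropped APK, and the fully qualified class name is assumed:

  <!-- Illustrative sketch of the receiver registration described above -->
  <receiver android:name="com.android.providers.downloadsmanager.DownloadCompleteReceiver">
      <intent-filter>
          <!-- run as soon as the phone finishes booting -->
          <action android:name="android.intent.action.BOOT_COMPLETED" />
          <!-- run whenever the call state changes -->
          <action android:name="android.intent.action.PHONE_STATE" />
      </intent-filter>
  </receiver>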

Conclusion

According to Symantec, there were 52 total infected apps published to the Android Market: 21 by kingmall2010, 21 by myournet, and 10 by we20090202. They also claim that there were anywhere from 50,000 to 200,000 downloads of infected apps before they were pulled from the Android market. While the infected apps are now gone from the market, any infected phone is still potentially compromised.

A piece of malware with root access to a phone can read any data stored on it and transmit it anywhere. This includes contact information, documents, and even stored account passwords. With root access it's possible to install components that aren't visible from the phone's user interface and can't be easily removed. For this reason, any compromised phone should be reset to its factory default state; in some cases this may require a trip back to the phone store.

Last year at SummerCon, Jon Oberheide demonstrated how easy it is to trick users into installing useless and fake applications that can download external malicious components. It's not necessary for apps in the market to contain malicious code embedded in them. So while this particular piece of malware was detected quite quickly, within a matter of days, it is possible for rogue apps to be stealthier.

Security-conscious consumers should be wary of the apps they install on their phone and only install apps from reputable publishers through the official Android Market. Here are some things to look for to identify an app from a reputable publisher:

  • The publisher has a website with contact information
  • The app is from the official and original publisher (e.g., if you are installing Angry Birds, ensure it's from Rovio Mobile Ltd)
  • The app has a high number of downloads, ratings, and positive reviews


Human aims could be good business

It's been almost two years since Google announced a philosophy shift at Google.org to focus more on attacking "problems in ways that make the most of Google's strengths in technology and information." One of the first successes from that shift, Google Earth Engine, may not only help developing countries get accurate data about their environments for the first time; such a massive collection of information and sophisticated analysis could pay financial dividends as well.
Google does a lot of charitable giving, but tucked away in a corner of its sprawling campus is a group drawn from all parts of the company that is dedicated to something a little more Googly than simply giving money away: "Can we use our engineering skills to design our way out (of the world's problems)?" Megan Smith, general manager of Google.org, said in an interview with CNET.
Take Google Earth Engine, conceived and run by Rebecca Moore, a former member of Google's Geo team who now works for Google.org full time. Moore developed quite a reputation in the environmental community after using Google Earth to map out a proposed logging project in the Santa Cruz Mountains that was defeated after the graphical presentation showed the project's scope was larger than advertised. That led to Google Earth Outreach, a project which taught environmental groups and governments how to use Google Earth as a presentation tool.
Environmental scientists were impressed by the tool, but what they really wanted was a tool that could let them analyze and manipulate the data stored in those images in order to make decisions about environmental policy, such as how much to compensate local groups for protecting forests against logging. Moore recognized that what they needed was something "intrinsically parallelizable;" in other words, something perfectly suited to be broken up into thousands of small tasks and run across a distributed network of servers.

Google.org wants to find hard problems that are often too much for poorer countries with limited or nonexistent IT budgets to solve on their own and apply Google's vast resources of computing power and human talent.
Around 100 Google employees are affiliated with Google.org, and while their salaries are paid out of Google.org's estimated 2011 budget of $45 million, they generally maintain a strong connection to the Google.com working group from which they came.
Earth Engine is an example of a "pilot" project started by one or two engineers from the Geo team that grew into a full-blown Google.org project, Smith said. There are five major products at the moment, including Google Earth Engine, Google Flu Trends, and Google PowerMeter.
Another Google Earth Engine project allowed the Surui tribe in the Amazon to receive compensation from the Brazilian government for maintaining the forests in their territory, the green area in the middle of the picture with the clear borders. The yellow and pink areas represent deforested land.
(Credit: Google)

Amazon CloudFront Video Streaming

Amazon CloudFront, the easy-to-use content delivery service, now supports the ability to stream audio and video files. Traditionally, world-class streaming has been out of reach for many customers: running streaming servers was technically complex, and customers had to negotiate long-term contracts with minimum commitments in order to gain access to the global streaming infrastructure needed for high performance.

Amazon CloudFront is designed to make streaming accessible for anyone with media content. Streaming with Amazon CloudFront is exceptionally easy: with only a few clicks on the AWS Management Console or a simple API call, you’ll be able to stream your content using a worldwide network of edge locations running Adobe’s Flash® Media Server. And, like all AWS services, Amazon CloudFront streaming requires no up-front commitments or long-term contracts. There are no additional charges for streaming with Amazon CloudFront; you simply pay normal rates for the data that you transfer using the service.

CIA Software Developer Goes Open Source

Burton, a former Defense Intelligence Agency analyst and software developer, speaks today at the Military Open Source Software Working Group in Virginia. It’s a gathering of 80 or so national security tech-types who’ve heard a thousand stories about good ideas and good code getting sunk because of squabbles over who owns the software.
Burton, for example, spent years on what should’ve been a straightforward project. Some CIA analysts work with a tool, “Analysis of Competing Hypotheses,” to tease out what evidence supports (or, mostly, disproves) their theories. But the Java-based software is single-user — so there’s no ability to share theories, or add in dissenting views. Burton, working on behalf of a Washington-area consulting firm with deep ties to the CIA, helped build on spec a collaborative version of ACH. He tried it out, using the JonBenet Ramsey murder case as a test. Burton tested 51 clues — the lack of a scream, evidence of bed-wetting — against five possible culprits. “I went in, totally convinced it all pointed to the mom,” Burton says. “Turns out, that wasn’t right at all.”
The program was supposed to work with Analytic Space, an online workspace for spooks. No one could come up with A-Space’s proprietary development specifications. Then came the problem of figuring out ACH’s licensing rights. Progress on the project ground to a halt.
“The Department of Defense spends tens of billions of dollars annually creating software that is rarely reused and difficult to adapt to new threats. Instead, much of this software is allowed to become the property of defense companies, resulting in DoD repeatedly funding the same solutions or, worse, repaying to use previously created software,” writes John M. Scott, a freelance defense consultant and a chief evangelist in the military open source movement. “Imagine if only the manufacturer of a rifle were allowed to clean, fix, modify or upgrade that rifle. This is where the military finds itself: one contractor with a monopoly on the knowledge of a military software system.”
Take Future Combat Systems, the Army’s behemoth program to make itself faster, smarter, and better-networked. One of the many reasons it collapsed: the code at the heart of the system was controlled by a single company, and not even the sub-contractors building gear that was supposed to rely on that code could have access to it.


Google Encrypted Search Engine

Google has changed the URL of its encrypted search to https://encrypted.google.com. Previously, this service was hosted at https://www.google.com. Many schools and institutions had reported problems with the older encrypted search site, as they were unable to stop students and employees from searching for filtered items. Encrypted search queries Google's search index over a secure, private connection, in such a manner that the network administrator cannot track or filter the search queries.

Organizations using Google Apps were also unable to block the encrypted search website, as doing so would have blocked Google Apps tools; Gmail and many other Google Apps applications are available only over HTTPS.

Today, Google solved this problem by introducing a separate subdomain for encrypted search, encrypted.google.com. Network administrators can now block Google’s SSL search without affecting access to other services.
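As a minimal illustration of what that blocking can look like (the exact mechanism, internal DNS, proxy rules or a firewall, will vary by network):

  # /etc/hosts entry (or an equivalent internal DNS record) that blackholes SSL search
  # while leaving www.google.com and Google Apps untouched
  127.0.0.1   encrypted.google.com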

 
