AdelPlex

Was Not Soccer !!



- Preserve the spirit of sportsmanship around the world, while revealing the barbarity of those who seek to destroy the moral fiber of team spirit and spectator sports.

- Correct the carefully crafted campaigns to twist facts and propagate lies, especially those practiced by Algerians in the qualifying matches leading to the 2010 South Africa World Cup.

- Call on civilized people to take a stand and express their condemnation of the unprecedented negative behavior of sports violence, where soccer stadiums and their surroundings become occupied by hooligans, vigilantes and mobs waving their weapons, assaulting and intimidating fans.

Website: http://www.wasnotsoccer.com

Egypt launches Arabic web domain

Egypt will open the world's first Arabic language internet domain ..

Dr. Tarek Kamel, Egypt's communications minister, said the new domain name would be ".masr" written in the Arabic alphabet.
It translates as ".Egypt".

"It is a great moment for us... The internet now speaks Arabic," Dr. Kamel said.

Last month, internet regulator Icann voted to allow non-Latin web addresses. Domain names can now be written in Arabic, Chinese and other scripts.

He said the new domain would "offer new avenues for innovation, investment and growth" in the Arabic-speaking world.

Internet Governance Forum, 09

The Internet Governance Forum live Streaming, Sharm El Sheikh .. Egypt

http://mcit.gov.eg/livestreaming.aspx




Jerry Yang, co-founder and former CEO of Yahoo, addressing the IGF ... Yahoooo !!

Open Source Business Intelligence

Open source business applications have started to mature into robust platforms, serving sales, finance and operational needs. Now, open source business intelligence (OSBI) platforms are also gaining attention, as owners of proprietary BI applications are navigating market consolidation, product roadmap changes and ever-increasing licensing costs.

OSBI platforms are typically marketed as commercial open source software (COSS), similar to the model popularized by Red Hat. COSS companies generate revenue from support, subscriptions and training. Most CIOs feel it’s critical for their applications to have an identified commercial entity standing behind business infrastructure rather than relying on the promise of community alone.




Jaspersoft provides high quality software and conveniently packaged services for Open Source and Professional edition customers. Use this guide to determine which software edition is right for you. Businesses can choose between Open Source or Professional for internal business use. Developers can choose between Open Source or Professional OEM for publicly distributed applications.

http://www.jaspersoft.com/sites/default/files/downloads/Choosing%20the%20Right%20JasperSoft%20BI%20Edition-2009.pdf




Pentaho addresses our reporting and data integration needs, provides tremendous flexibility, and offers far better value than proprietary alternatives. Pentaho was also an attractive partner because of the quality of their team, and the size and activity of the Pentaho community.
Deployment Overview

Key Challenges
* Integrating multiple streams of consumer purchasing information

Solution Components
* Pentaho Reporting Enterprise Edition
* Pentaho Data Integration Enterprise Edition

Environment
* Debian Linux, MySQL database

Why Pentaho
* Technology maturity
* Market leadership
* Value vs. proprietary
* Open source advantages - community, openness, standards support

http://demo.pentaho.com/pentaho/Home

Business Intelligence White Papers

The Business Intelligence resource for business and technical professionals covering a wide range of topics including Performance Management, Data Warehouse, Analytics, Data Mining, Reporting, Customer Relationship Management and Balanced Scorecard.

You can download your selection at the following locations:

http://www.businessintelligence.com/fwp/Defining_Business_Analytics.pdf
http://www.businessintelligence.com/fwp/Uncovering_Insight_Hidden_In_Text.pdf
http://www.businessintelligence.com/fwp/EI_In_Search_Of_Clarity.pdf
http://www.businessintelligence.com/fwp/Making_Business_Relevant_Information.pdf
http://www.businessintelligence.com/fwp/Expanding_BI_Role_By_Including_Predictive_Analytics.pdf
http://www.businessintelligence.com/fwp/All_Information_All_People_One_Platform.pdf
http://www.businessintelligence.com/fwp/Business_Intelligence_Now_More_Than_Ever.pdf
http://www.businessintelligence.com/fwp/Business_Intelligence_Standardization.pdf
http://www.businessintelligence.com/fwp/BI_for_Decision_Makers.pdf
http://www.businessintelligence.com/fwp/Business_Intelligence_The_Definitive_Guide.pdf
http://www.businessintelligence.com/fwp/Leveraging_Solutions.pdf

600 million unique visits per month: Yahoo open sources Traffic Server

Today, Yahoo moved its open source cloud computing initiatives up a notch with the donation of its Traffic Server product to the Apache Software Foundation. Traffic Server is used in-house at Yahoo to manage its own traffic and it enables session management, authentication, configuration management, load balancing, and routing for entire cloud computing stacks. We asked the cloud computing team at Yahoo for a series of guest posts about Traffic Server, and you'll find the first one here.

Introducing Traffic Server
By The Yahoo Cloud Computing Team


Today, Yahoo is excited to open source Traffic Server, software that we rely on extensively. An Apache Incubator project, Traffic Server is an extremely high performance Web proxy-caching server, and has a robust plugin API that allows you to modify and extend its behavior and capabilities.

Traffic Server ships with not only an HTTP web proxy and caching solution, but also provides a server framework with which you can build very fast servers for other protocols. As an HTTP web proxy, Traffic Server sits between clients and servers and adds services like caching, request routing, filtering, and load balancing. Web sites frequently use a caching server to improve response times by locally storing web pages, web services, or web objects like images, JavaScript, and style sheets, and to relieve the burden of creating these pages/services from their front and back end infrastructure. Corporations and ISPs frequently use forward proxy servers to help protect their users from malicious content, and/or speed delivery of commonly requested pages. The Traffic Server code and documentation are available today, and we'll be making a release version in the near future.
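To make the idea concrete, here is a toy sketch of a caching proxy in Python. This is not how Traffic Server is built (it is C++ and event-driven); it only illustrates the concept described above: serve repeat requests from a local store, and contact the origin only on a miss.

import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CACHE = {}  # url -> (status, body); a real cache would honor Cache-Control

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        url = self.path  # proxy-style requests carry the absolute URL
        if url not in CACHE:
            with urllib.request.urlopen(url) as resp:  # cache miss: ask the origin
                CACHE[url] = (resp.status, resp.read())
        status, body = CACHE[url]
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8888), CachingProxy).serve_forever()

Point a browser's HTTP proxy setting at 127.0.0.1:8888 to try it; the second fetch of any page comes from the in-memory cache.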

Traffic Server is fast. It was designed from the start as a multi-threaded event driven server, and thus scales very well on modern multi-core servers. With a quad core 1.86GHz processor, it can do more than 30,000 requests/second for certain traffic patterns. In contrast, some of the other caching proxy servers we've used max out at around 8,000 requests/second using the same hardware.

It's extensible. It has native support for dynamically loading shared objects that can interact with the core engine. Yahoo! has internal plugins that remap URLs; route requests to different services based on cookies; allow caching of oAuth authenticated requests; and modify behaviors based on Cache-Control header extensions. We've replaced the default memory cache with a plugin. It's even possible to write plugins to handle other protocols like FTP, SMTP, SOCKS, RTSP; or to modify the response body. There is documentation for the plugin APIs, and sample plugin code available today.
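Traffic Server's real plugin API is C (dynamically loaded shared objects), so the following Python sketch is only an illustration of the hook pattern described above; every name in it is invented.

from dataclasses import dataclass, field

@dataclass
class Request:
    url: str
    headers: dict = field(default_factory=dict)
    backend: str = "default-pool"

REMAP_HOOKS = []

def plugin(func):
    """Register a function to run on every incoming request."""
    REMAP_HOOKS.append(func)
    return func

@plugin
def remap_legacy_urls(req):
    req.url = req.url.replace("/old/", "/new/")

@plugin
def route_by_cookie(req):
    if "beta=1" in req.headers.get("Cookie", ""):
        req.backend = "beta-pool"

def dispatch(req):
    for hook in REMAP_HOOKS:  # run every registered plugin, in order
        hook(req)
    return req

print(dispatch(Request("/old/page", {"Cookie": "beta=1"})))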

Traffic Server is serving more than 30 billion Web objects a day across the Yahoo! network, delivering more than 400 terabytes of data per day. It's in use as a proxy or cache (or both) by services like the Yahoo! Front Page, Mail, Sports, Search, News, and Finance. We continue to find new uses for Traffic Server, and it gets more and more ingrained into our infrastructure each day.

At its heart, Traffic Server is a general-purpose implementation that can be used to proxy and cache a variety of workloads, from single site acceleration to CDN deployment and very large ISP proxy caching. It has all the major features you'd expect from such a server, including behavior like cache partitioning. You can dedicate different cache volumes to selected origin servers, allowing you to serve multiple sites from the same cache without worrying about one of them being "pushed" out of the cache by the others.
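The partitioning idea can be sketched in a few lines of Python: give each origin its own bounded LRU "volume", so a busy site can only evict its own objects. The sizes and host names below are made up.

from collections import OrderedDict

class LRUVolume:
    """A bounded cache that evicts only its own least-recently-used items."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as recently used
            return self.store[key]
        return None

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict within this volume only

# Dedicated volumes: the busy site cannot push the small one out of cache.
VOLUMES = {"www.example.com": LRUVolume(10000),
           "static.example.com": LRUVolume(2000)}

def volume_for(host):
    return VOLUMES.get(host, VOLUMES["www.example.com"])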

The current version of Traffic Server is the product of literally hundreds of developer-years. It originated at Inktomi as the Inktomi Traffic Server, and was successfully sold commercially for several years. Chuck Neerdaels, one of the co-authors of Harvest, which became the popular open source Squid proxy caching server, has been integral in Traffic Server's history, managing the early development team, and leading the group today. Yahoo! acquired Inktomi in 2003, and has a full time development team working on the server. We plan to continue active development. For example, we are planning to add support for IPv6 and 64-bit systems, and improve its performance when dealing with very large files. We'd love to work with the community on these and other efforts.

Of course, the server is neither perfect nor complete. Internally, Yahoo! uses Squid for some caching use cases where we need more fine-grained cache controls like refresh_patterns, stale-if-error, and stale-while-revalidate. By open sourcing it, the community can help add the features you need more quickly than Yahoo! can by itself. In exchange, the public gets access to a server that Yahoo! has found incredibly useful to speed page downloads and save back-end resources through caching.

As an Apache Incubator project, we hope to graduate to a full Apache top level project. We chose the Apache Software Foundation because of our experience with the Hadoop project, its great infrastructure for supporting long-running projects, and its long history of delivering enterprise-class free software that supports large communities of users and developers alike.

Over the next few weeks, look for more detailed posts on plugins, how to get started with using the code, and more details on the roadmap and how to get involved in the project. In the meantime, grab the source, browse the documentation, send feedback, and help make the project even better.

Matrix Runs over Microsoft Windows

Google’s First Production Server


Google’s First Production Server ... with the hair pulled back, revealing a rack of cheap networked PCs, circa 1999.

Each level has a couple of PC boards slammed in there, partially overlapping. This approach reflects a presumption of rapid obsolescence of cheap hardware, which would not need to be repaired. Several of the PCs never worked, and the system design was optimized to tolerate multiple computer failures.

According to Larry and Sergey, the beta system used Duplo blocks for the chassis because generic brand plastic blocks were not rigid enough.

Original hardware

The original hardware (ca. 1998) used by Google while it was located at Stanford University included:

  • Sun Ultra II with dual 200 MHz processors, and 256MB of RAM. This was the main machine for the original Backrub system.
  • 2 x 300 MHz Dual Pentium II servers donated by Intel; they included 512MB of RAM and 9 x 9GB hard drives between the two. It was on these that the main search ran.
  • F50 IBM RS/6000 donated by IBM, included 4 processors, 512MB of memory and 8 x 9GB hard drives.
  • Two additional boxes included 3 x 9GB hard drives and 6 x 4GB hard drives respectively (the original storage for Backrub). These were attached to the Sun Ultra II.
  • IBM disk expansion box with another 8 x 9GB hard drives donated by IBM.
  • Homemade disk box which contained 10 x 9GB SCSI hard drives.

Google's server infrastructure is divided into several types, each assigned to a different purpose:

  • Google load balancers take the client request and forward it to one of the Google Web Servers via Squid proxy servers.
  • Squid proxy servers take the client request from the load balancers and return the result if it is present in the local cache; otherwise they forward it to a Google Web Server.
  • Google web servers coordinate the execution of queries sent by users, then format the result into an HTML page. The execution consists of sending queries to index servers, merging the results, computing their rank, retrieving a summary for each hit (using the document server), asking for suggestions from the spelling servers, and finally getting a list of advertisements from the ad server.
  • Data-gathering servers are permanently dedicated to spidering the Web. Google's web crawler is known as GoogleBot. They update the index and document databases and apply Google's algorithms to assign ranks to pages.
  • Each index server contains a set of index shards. They return a list of document IDs ("docid"), such that documents corresponding to a certain docid contain the query word. These servers need less disk space, but suffer the greatest CPU workload. (A simplified sketch of this split follows the list.)
  • Document servers store documents. Each document is stored on dozens of document servers. When performing a search, a document server returns a summary for the document based on query words. They can also fetch the complete document when asked. These servers need more disk space.
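As an illustration of that index-server/document-server split, here is a deliberately tiny Python sketch: per-shard posting lists are unioned per word, intersected across words, "ranked", and summaries are pulled from a document store. All of the data, and the stand-in for real ranking, are invented.

SHARDS = [
    {"linux": {1, 3}, "kernel": {3}},   # index shard 0: word -> docids
    {"linux": {7}, "google": {7, 9}},   # index shard 1
]
DOCSTORE = {1: "Linux intro...", 3: "Kernel notes...",
            7: "Google on Linux...", 9: "Google infra..."}

def search(query):
    words = query.lower().split()
    # Ask every index shard for each word, then intersect across words.
    per_word = [set().union(*(s.get(w, set()) for s in SHARDS)) for w in words]
    docids = set.intersection(*per_word) if per_word else set()
    ranked = sorted(docids)  # stand-in for real rank computation
    return [(d, DOCSTORE[d][:40]) for d in ranked]  # summaries from doc servers

print(search("linux kernel"))  # -> [(3, 'Kernel notes...')]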

The 2009 Linux Kernel Summit Report

The 2009 Linux Kernel Summit was held in Tokyo, Japan on October 19 and 20. Jet-lagged developers from all over the world discussed a wide range of topics.


The sessions held on the first day of the summit were:

  • Mini-summit readouts; reports from various mini-summit meetings which have happened over the last six months.

  • The state of the scheduler, the kernel subsystem that everybody loves to complain about.

  • The end-user panel, wherein Linux users from the enterprise and embedded sectors talk about how Linux could serve them better.

  • Regressions. Nobody likes them; are the kernel developers doing better at avoiding and fixing them?

  • The future of perf events; a discussion of where this new subsystem is likely to go next.

  • LKML volume and related issues. A session slot set aside for lightning talks was really mostly concerned with the linux-kernel mailing list and those who post there.

  • Generic device trees. The device tree abstraction has proved helpful in the creation of generic kernels for embedded hardware. This session talked about what a device tree is and why it's useful.

The discussions on the second day were:

  • Legal issues; a lawyer visits the summit to talk about the software patent threat and how to respond to it.

  • How Google uses Linux: the challenges faced by one of our largest and most secretive users.

  • Performance: is the kernel getting slower? How do we know and where are the problems coming from?

  • Realtime: issues related to the merging of the realtime preemption tree into the mainline.

  • Generic architecture support: making it easier to port Linux to new processor architectures.

  • Development process issues, including linux-next, staging, merge window rules, and more.

The kernel summit closed with a general feeling that the discussions had gone well. It was also noted that our Japanese hosts had done an exceptional job in supporting the summit and enabling everything to happen; it would not be surprising to see developers agitating for the summit to return to Japan in the near future.

Gerrit: Google-style code review meets git

Gerrit, a Git-based system for managing code review, is helping to spread the popular distributed revision control system into Android-using companies, many of which have heavy quality assurance, management, and legal processes around software. HTC, Qualcomm, TI, Sony Ericsson, and Android originator Google are all running Gerrit, project leader Shawn Pearce said in a talk at the October 2009 GitTogether event, hosted at Google in Mountain View.

How Google uses Linux

There may be no single organization which runs more Linux systems than Google. But the kernel development community knows little about how Google uses Linux and what sort of problems are encountered there. Google's Mike Waychison traveled to Tokyo to help shed some light on this situation; the result was an interesting view on what it takes to run Linux in this extremely demanding setting.

Mike started the talk by giving the developers a good laugh: it seems that Google manages its kernel code with Perforce. He apologized for that. There is a single tree that all developers commit to. About every 17 months, Google rebases its work to a current mainline release; what follows is a long struggle to make everything work again. Once that's done, internal "feature" releases happen about every six months.

This way of doing things is far from ideal; it means that Google lags far behind the mainline and has a hard time talking with the kernel development community about its problems.

There are about 30 engineers working on Google's kernel. Currently they tend to check their changes into the tree, then forget about them for the next 18 months. This leads to some real maintenance issues; developers often have little idea of what's actually in Google's tree until it breaks.

And there's a lot in that tree. Google started with the 2.4.18 kernel - but they patched over 2000 files, inserting 492,000 lines of code. Among other things, they backported 64-bit support into that kernel. Eventually they moved to 2.6.11, primarily because they needed SATA support. A 2.6.18-based kernel followed, and they are now working on preparing a 2.6.26-based kernel for deployment in the near future. They are currently carrying 1208 patches to 2.6.26, inserting almost 300,000 lines of code. Roughly 25% of those patches, Mike estimates, are backports of newer features.

There are plans to change all of this; Google's kernel group is trying to get to a point where they can work better with the kernel community. They're moving to git for source code management, and developers will maintain their changes in their own trees. Those trees will be rebased to mainline kernel releases every quarter; that should, it is hoped, motivate developers to make their code more maintainable and more closely aligned with the upstream kernel.

Linus asked: why aren't these patches upstream? Is it because Google is embarrassed by them, or is it secret stuff that they don't want to disclose, or is it a matter of internal process problems? The answer was simply "yes." Some of this code is ugly stuff which has been carried forward from the 2.4.18 kernel. There are also doubts internally about how much of this stuff will be actually useful to the rest of the world. But, perhaps, maybe about half of this code could be upstreamed eventually.

As much as 3/4 of Google's code consists of changes to the core kernel; device support is a relatively small part of the total.

Google has a number of "pain points" which make working with the community harder. Keeping up with the upstream kernel is hard - it simply moves too fast. There is also a real problem with developers posting a patch, then being asked to rework it in a way which turns it into a much larger project. Alan Cox had a simple response to that one: people will always ask for more, but sometimes the right thing to do is to simply tell them "no."

In the area of CPU scheduling, Google found the move to the completely fair scheduler to be painful. In fact, it was such a problem that they finally forward-ported the old O(1) scheduler and can run it in 2.6.26. Changes in the semantics of sched_yield() created grief, especially with the user-space locking that Google uses. High-priority threads can make a mess of load balancing, even if they run for very short periods of time. And load balancing matters: Google runs something like 5000 threads on systems with 16-32 cores.

On the memory management side, newer kernels changed the management of dirty bits, leading to overly aggressive writeout. The system could easily get into a situation where lots of small I/O operations generated by kswapd would fill the request queues, starving other writeback; this particular problem should be fixed by the per-BDI writeback changes in 2.6.32.

As noted above, Google runs systems with lots of threads - not an uncommon mode of operation in general. One thing they found is that sending signals to a large thread group can lead to a lot of run queue lock contention. They also have trouble with contention for the mmap_sem semaphore; one sleeping reader can block a writer which, in turn, blocks other readers, bringing the whole thing to a halt. The kernel needs to be fixed to not wait for I/O with that semaphore held.

Google makes a lot of use of the out-of-memory (OOM) killer to pare back overloaded systems. That can create trouble, though, when processes holding mutexes encounter the OOM killer. Mike wonders why the kernel tries so hard, rather than just failing allocation requests when memory gets too tight.

So what is Google doing with all that code in the kernel? They try very hard to get the most out of every machine they have, so they cram a lot of work onto each. This work is segmented into three classes: "latency sensitive," which gets short-term resource guarantees, "production batch" which has guarantees over longer periods, and "best effort" which gets no guarantees at all. This separation of classes is done partly by dividing each machine into a large number of fake "NUMA nodes." Specific jobs are then assigned to one or more of those nodes. One thing added by Google is "NUMA-aware VFS LRUs" - virtual memory management which focuses on specific NUMA nodes. Nick Piggin remarked that he has been working on something like that and would have liked to have seen Google's code.
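Google's mechanism here is internal, but the stock Linux facility for binding a job to chosen (possibly fake) NUMA nodes is cpusets. The sketch below assumes a cgroup-v1 cpuset filesystem mounted at /dev/cpuset (on some mounts the files are named just "cpus" and "mems"); the job name, CPU list, node number and pid are all invented, and it must run as root.

import os

CPUSET_ROOT = "/dev/cpuset"  # assumption: cpuset filesystem mounted here

def assign_job(job, cpus, mems, pid):
    """Create a cpuset for `job`, bind it to CPUs and memory nodes, add pid."""
    path = os.path.join(CPUSET_ROOT, job)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "cpuset.cpus"), "w") as f:
        f.write(cpus)      # e.g. "0-3"
    with open(os.path.join(path, "cpuset.mems"), "w") as f:
        f.write(mems)      # e.g. "1" -> (fake) NUMA node 1
    with open(os.path.join(path, "tasks"), "w") as f:
        f.write(str(pid))  # move the process into the cpuset

# assign_job("latency_sensitive", "0-3", "0", 12345)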

There is a special SCHED_GIDLE scheduling class which is a truly idle class; if there is no spare CPU available, jobs in that class will not run at all. To avoid priority inversion problems, SCHED_GIDLE processes have their priority temporarily increased whenever they sleep in the kernel (but not if they are preempted in user space). Networking is managed with the HTB queueing discipline, augmented with a bunch of bandwidth control logic. For disks, they are working on proportional I/O scheduling.

Beyond that, a lot of Google's code is there for monitoring. They monitor all disk and network traffic, record it, and use it for analyzing their operations later on. Hooks have been added to let them associate all disk I/O back to applications - including asynchronous writeback I/O. Mike was asked if they could use tracepoints for this task; the answer was "yes," but, naturally enough, Google is using its own scheme now.

Google has a lot of important goals for 2010; they include:

  • They are excited about CPU limits; these are intended to give priority access to latency-sensitive tasks while still keeping those tasks from taking over the system entirely.

  • RPC-aware CPU scheduling; this involves inspection of incoming RPC traffic to determine which process will wake up in response and how important that wakeup is.

  • A related initiative is delayed scheduling. For most threads, latency is not all that important. But the kernel tries to run them immediately when RPC messages come in; these messages tend not to be evenly distributed across CPUs, leading to serious load balancing problems. So threads can be tagged for delayed scheduling; when a wakeup arrives, they are not immediately put onto the run queue. Instead, they wait until the next global load balancing operation before becoming truly runnable.

  • Idle cycle injection: high-bandwidth power management so they can run their machines right on the edge of melting down - but not beyond.

  • Better memory controllers are on the list, including accounting for kernel memory use.

  • "Offline memory." Mike noted that it is increasingly hard to buy memory which actually works, especially if you want to go cheap. So they need to be able to set bad pages aside. The HWPOISON work may help them in this area.

  • They need dynamic huge pages, which can be assembled and broken down on demand.

  • On the networking side, there is a desire to improve support for receive-side scaling - directing incoming traffic to specific queues. They need to be able to account for software interrupt time and attribute it to specific tasks - networking processing can often involve large amounts of softirq processing. They've been working on better congestion control; the algorithms they have come up with are "not Internet safe" but work well in the data center. And "TCP pacing" slows down outgoing traffic to avoid overloading switches.

  • For storage, there is a lot of interest in reducing block-layer overhead so it can keep up with high-speed flash. Using flash for disk acceleration in the block layer is on the list. They're looking at in-kernel flash translation layers, though it was suggested that it might be better to handle that logic directly in the filesystem.

Mike concluded with a couple of "interesting problems." One of those is that Google would like a way to pin filesystem metadata in memory. The problem here is being able to bound the time required to service I/O requests. The time required to read a block from disk is known, but if the relevant metadata is not in memory, more than one disk I/O operation may be required. That slows things down in undesirable ways. Google is currently getting around this by reading file data directly from raw disk devices in user space, but they would like to stop doing that.

The other problem was lowering the system call overhead for providing caching advice (with fadvise()) to the kernel. It's not clear exactly what the problem was here.
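For reference, the advice interface in question is the posix_fadvise() system call, which Python exposes as os.posix_fadvise(). A minimal sketch of issuing such caching advice (the file path is a placeholder):

import os

fd = os.open("/tmp/example.dat", os.O_RDONLY)  # placeholder path
size = os.fstat(fd).st_size

# Hint: we will read this range soon; the kernel may start readahead.
os.posix_fadvise(fd, 0, size, os.POSIX_FADV_WILLNEED)

data = os.read(fd, size)

# Hint: we are done with it; the kernel may drop these pages from the cache.
os.posix_fadvise(fd, 0, size, os.POSIX_FADV_DONTNEED)
os.close(fd)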

All told, it was seen as one of the more successful sessions, with the kernel community learning a lot about one of its biggest customers. If Google's plans to become more community-oriented come to fruition, the result should be a better kernel for all.

Barack Obama proves the power of Open Source

It would be a bit of a stretch to claim that Barack Obama won the 2008 election because his website ran open source software while John McCain's ran on proprietary software. But what is not a stretch at all is that Barack Obama's campaign built a powerful synergy between grass-roots politics and grass-roots technology, while presenting what many consider to be the most disciplined campaign of any candidate in modern history.

For those who'd like to do their homework and understand Castellanos's sources, the book he references is The Cathedral and the Bazaar. The connection between the failing industrial model practiced by companies like Microsoft compared with the organic open source development model is detailed in a whitepaper I published in 2006 titled Software Industry vs. Software Society: Who Wins in 2020?. Who knew that we'd only have to wait two more years before the logic that paper presents would become a mainstream explanation of a mainstream shift in American culture, identity, politics, and economic potential?

But Open Source has much more to deliver to this President and to the nation, in terms of reforming Washington and our Federal government. One of the strongest criticisms made against Barack Obama during his campaign is that he consistently said that he would go through the Federal budget "line-by-line" and cut wasteful spending, but he never gave any specifics. The open source-based application http://USAspending.gov was implemented after Congress passed a law in 2006 saying that by the start of 2008, every government contract for every government agency (except those that are classified) had to be online, with information disclosing costs, sponsors, contractors, etc. By using open source software and an open source-friendly governance model, the program was delivered ahead of schedule and under budget. And everybody in the world can now inspect the Federal budget on a line-by-line basis.


Ubuntu 9.10 is here ..

Free Operating System for your desktop or laptop

download Ubuntu


Free software

For Ubuntu, the 'free' in 'free software' is used primarily in reference to freedom, and not to price - although we are committed to not charging for Ubuntu. The most important thing about Ubuntu is that it confers rights of software freedom on the people who install and use it. It is these freedoms that enable the Ubuntu community to grow, continue to share its collective experience and expertise to improve Ubuntu and make it suitable for use in new countries and new industries.

Quoting the Free Software Foundation's 'What is Free Software', the freedoms at the core of free software are defined as:

  • The freedom to run the programme, for any purpose.
  • The freedom to study how the programme works and adapt it to your needs.
  • The freedom to redistribute copies so you can help others.
  • The freedom to improve the programme and release your improvements to the public, so that everyone benefits.

Open source
Open source is a term coined in 1998 to remove the ambiguity in the English word 'free'. The Open Source Initiative described open source software in the Open Source Definition. Open source continues to enjoy growing success and wide recognition.

Ubuntu is happy to call itself open source. While some refer to free and open source as competing movements with different ends, we do not see free and open source software as either distinct or incompatible. Ubuntu proudly includes members who identify with both movements.

Whitehouse.gov switches to Drupal

The new media team at the White House announced via the Associated Press that whitehouse.gov is now running on Drupal, the open source content management system. That Drupal implementation is in turn running on a Red Hat Linux system with Apache, MySQL and the rest of the LAMP stack. Apache Solr is the new White House search engine.

This move is obviously a big win for the open source community.

WASHINGTON — A programming overhaul of the White House's Web site has set the tech world abuzz. For low-techies, it's a snooze – you won't notice a thing.

The online-savvy administration on Saturday switched to open-source code for http://www.whitehouse.gov – meaning the programming code is written in public view, available for public use, and open for people to edit.

"We now have a technology platform to get more and more voices on the site," White House new media director Macon Phillips told The Associated Press hours before the new site went live on Saturday. "This is state-of-the-art technology and the government is a participant in it."

White House officials described the change as similar to rebuilding the foundation of a building without changing the street-level appearance of the facade. It was expected to make the White House site more secure – and the same could be true for other administration sites in the future.

"Security is fundamentally built into the development process because the community is made up of people from all across the world, and they look at the source code from the very start of the process until it's deployed and after," said Terri Molini of Open Source for America, an interest group that has pushed for more such programs.

Having the public write code may seem like a security risk, but it's just the opposite, experts inside and outside the government argued. Because programmers collaborate to find errors or opportunities to exploit Web code, the final product ends up more secure.

FOSS Network Security

Sure, Symantec, MessageLabs, SonicWall, Cisco, Juniper and other big names appear to have the market sewn up. Yet most IT managers are far too busy or strapped for cash to investigate all the options and demo a range of expensive products. It is a sad situation for any company to expose its network and users to risk over price – and network security isn't an island; in this connected world, any organisation's lack of protection could well be the key to the next DDoS attack against your domain, let alone the relentless flood of spam and viruses.

Open source software is an excellent choice; price is not an issue and, at worst, there’s no obligation to continue with an open source product if you find it does not in fact meet your needs; there’s nothing worse than continuing with an ineffectual product just because you feel committed due to the size of the cheque you wrote.

However, businesses are often dubious of the merits of free software and fear that such products will be abandoned or lack support. Yet there are many robust and stable free open source products available which are used extensively and come with a worldwide community of fellow implementers. The most-used web server in the world, Apache, is a striking FOSS example.

Obviously, there is a lot of software out there. This makes it hard for a company to know where to begin, and how to separate what's good from what's not quite so good. We'd like to help: here are some excellent security applications which every business should put on their list to consider. Each one has a wide user base, plenty of support, and is proven and robust.

SNORT
ClamAV
SpamAssassin
L7 Filter
OpenVPN

CLAM AV

It’s a sad truth that all organizations need an anti-virus solution. Now, let’s clarify one thing: yes, Microsoft Windows is far more predisposed to virus problems than other operating systems but this does not negate the need for Linux shops to scan also.

It’s entirely possible for a Linux server to be hosting virus-infected files, whether as e-mail attachments or stored files on a Samba share or something else. Now, these will not harm Linux or its users, but it would be a terrible crime against the Internet as a whole to be ignorantly passing viruses on.

Happily, no matter the platform, there's a FOSS solution for you: ClamAV is a well-regarded anti-virus scanning engine with a flexible range of configurations. A Windows version is also available, so no matter your mix of server and desktop platforms, you have seamless protection.
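As a sketch of how easily ClamAV slots into your own tooling: the stock clamscan binary reports its verdict in its exit status (0 = clean, 1 = virus found, 2 = error), so a wrapper is a few lines of Python. The share path below is a placeholder.

import subprocess

def scan(path):
    result = subprocess.run(
        ["clamscan", "--no-summary", "-r", path],  # -r: recurse directories
        capture_output=True, text=True,
    )
    if result.returncode == 1:
        return ("INFECTED", result.stdout)  # hits are listed on stdout
    if result.returncode == 0:
        return ("CLEAN", "")
    return ("ERROR", result.stderr)

status, detail = scan("/srv/samba/share")  # placeholder share path
print(status)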

SpamAssassin

Spam is a plague on the Internet. It’s constant and unending. Some ISPs and free web mail providers offer anti-spam services but if you’re running your own mail server, accepting mail to your own domain, you have to work out your own solution.

First, some commercial pricing: want Symantec Mail Security? Depending on the number of licenses you buy, you're looking at $AUD 32 per mailbox to cull out virus e-mails; it's then another $AUD 31 per mailbox to license the anti-spam addon; without this, Symantec Mail Security will leave spam alone. You can't buy it by itself. For a company with 100 mailboxes, you're looking at $AUD 6,300 to keep spam out.

And be wary of upgrading; Symantec Mail Security version 6 supports Microsoft Exchange 2007, but the product came out after Exchange 2007; early adopters of Microsoft’s mail platform had no working Symantec option at the time of purchase.

This highlights a problem with proprietary solutions. They are locked in to specific technologies. There's not likely to be another version of Exchange for three or four years, so the company above won't soon face the problem of an unsupported version again, but consider that they may opt to change their mail platform in the interim – over to Lotus Domino, say. Now their Symantec Mail Security for Exchange offers no protection at all; they will need to purchase a different version which supports the new platform.

This isn’t a Symantec bash, by any means; indeed, Symantec offer good products (while their Norton line is bloated and burdensome) and they are far from alone in being a provider of commercial, licensed, anti-spam products that specifically tie in to a targeted mail server.
One philosophy that the free and open source movement has espoused, whether intentionally or not, is to promote standards-based software and this is the key to platform-free anti-spam.

Before we get onto that, a case in point about standards is the very argument used against the Firefox web browser, namely that users can find sites which do not load properly under Firefox yet work within Internet Explorer. The reason is invariably that the site has been either badly designed or designed to specifically target Internet Explorer. Firefox prides itself on a very strict and complete implementation of World Wide Web Consortium (W3C) standards; by contrast, Internet Explorer is more accepting of coding flaws.

On the surface that sounds helpful but it is not; bad CSS renders despite incorrect or missing quotes or brackets. The web page author is not aware of problems and has no motivation to fix them. However, a growing proportion of Internet users will be unable to see their page as it was intended to be displayed – unless every other browser also receives programming modifications allowing them to tolerate the exact same flaws IE will, and with the exact same results.

Non-standard Internet Explorer extensions do nothing to help either; pages exploiting such are never going to be viewable as intended in other browsers because of the pervasive mantra within FOSS providers that standards be upheld and adhered to.
So, returning to our problem at hand, the standard for e-mail delivery is SMTP, the simple mail transfer protocol. All mail servers, no matter who the vendor, implement SMTP according to a defined and well-accepted standard known simply as “RFC 821”, which stood for almost twenty years before being updated in the form of RFC 2821.

Microsoft Exchange implements SMTP to receive mail. So does Lotus Domino. So does Sendmail. Consequently, a non-proprietary anti-spam solution can wedge itself between the Internet and your mail server, providing its own SMTP implementation and accepting mail in the first instance. It will cleanse the incoming mail of spam and then pass it on to your real mail server, giving a pure flow of clean e-mail. There’s a secondary advantage too; your e-mail server becomes one step removed from the public Internet.
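Here is a sketch of the filtering half of that wedge, assuming SpamAssassin's spamc/spamd pair is installed: pipe the raw message through spamc, check the X-Spam-Flag header it adds, and relay clean mail to the real server. The listening side (accepting mail from the Internet) is omitted, and the internal host name is a placeholder.

import smtplib
import subprocess
from email import message_from_bytes

REAL_MAIL_SERVER = "mail.internal.example"  # placeholder

def filter_and_relay(raw_message: bytes, sender: str, recipients: list) -> bool:
    # spamc hands the message to the spamd daemon and returns it with
    # X-Spam-* headers added (and rewritten if it scored as spam).
    tagged = subprocess.run(["spamc"], input=raw_message,
                            capture_output=True).stdout
    msg = message_from_bytes(tagged)
    if msg.get("X-Spam-Flag", "NO").upper() == "YES":
        return False  # spam: quarantine or drop instead of relaying
    with smtplib.SMTP(REAL_MAIL_SERVER) as relay:
        relay.sendmail(sender, recipients, tagged)
    return True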

Now, I have to be fair; this idea isn’t the sole domain of the open source world. Indeed, it’s the model used by services like MessageLabs who provide just this very service themselves. Your e-mail goes straight to them, they cleanse it and pass it on. It’s pure SMTP, with no regard for your operating system or mail server environment.

Yet, these services are businesses; they’re going to charge for what they do. That’s fair enough, but it can easily become depressing when your users say “We have too much spam; you must do something” but the solutions are all like taxi meters, ticking over – “you have how many mailboxes?” they ask. “How many domains do you have?”, “How many e-mail messages do you receive a day?” Every question adds up; every single item adds up.

Can open source compete? Pleasantly, the answer is definitely yes – with some impressively mature and stable products available right now. It doesn’t matter if you’re a Windows or Linux user, it doesn’t matter which mail environment you run.

Layer 7 Filtering

L7 Filter is a nice, but perhaps little known, SourceForge project which provides an add-in module for iptables, the Linux firewall product. This obviously means it requires a Linux firewall be on your network for L7 Filter to be of use to you.

L7 Filter makes it possible to detect and prevent a range of network protocols which would otherwise be difficult to detect because they work over a number of different ports and aren’t limited to just one.
An example: companies often want to block BitTorrent applications, which may be running on any of a range of different ports. Or, they might want to block MSN Messenger or other instant messaging applications; these usually do use a fixed port but can switch to other ports, including the web port, port 80, making them burdensome for administrators to stop outright.
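Scripting such rules is straightforward. The sketch below uses the l7-filter project's documented iptables syntax; it assumes a kernel patched with the layer7 match module and its protocol pattern files installed, and must run as root.

import subprocess

def block_protocol(proto):
    """Drop forwarded traffic classified as `proto` (e.g. 'bittorrent')."""
    subprocess.run(
        ["iptables", "-A", "FORWARD",
         "-m", "layer7", "--l7proto", proto,
         "-j", "DROP"],
        check=True,
    )

for proto in ("bittorrent", "msnmessenger"):
    block_protocol(proto)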

OPEN VPN

No matter the size of your company, at some point you’ll have users who want to get into the network from home or out on the road whether it be to check mail, work on the database, put together a tender or proposal or any of a million other reasons.

This is where VPN – virtual private network – products come in. Yet, many VPN clients from the likes of Cisco, Nortel and Juniper and other companies of their ilk do not co-operate with each other, or are limited in the operating systems they run on, or generally put up other obstacles.

Instead, OpenVPN is an easily-configured, easily managed VPN tool which works effectively and is available for Windows, Linux and Mac computers. OpenVPN is quicker to set up than IPSec and PPTP and is both cost-effective and stable.

SNORT IDS

Snort, with its funny name, has three primary operating modes. The first two are not really intrusion related; they merely read the network packets received and display them on screen or log them to disk. In these modes, Snort acts as a network sniffer and packet logger. These in themselves can be useful applications, but they are not where Snort really shows its stuff.

Snort’s third operating mode – network intrusion detection – is when the magic happens. Here, Snort actually pays attention to the network traffic passing its electronic eyes and matches what it sees according to a database of updatable signatures as well as any custom user-defined rules. In this mode, Snort does for networks what anti-virus tools do for filesystems.

What’s best is it still runs when you’re asleep, processing packets, log files and more. Actually, you can configure it to send alerts via SMS or other means that can even wake up your network or security staff. Or, you could define rules so Snort blocks the suspicious traffic as well as other traffic from the originating host.
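One simple way to wire up such alerting is to follow Snort's plain-text alert file and mail each new entry to an operator. A sketch, with placeholder addresses (production setups more often use syslog or a database output plugin):

import smtplib
import time
from email.message import EmailMessage

ALERT_FILE = "/var/log/snort/alert"  # default text-alert location

def mail_alert(line):
    msg = EmailMessage()
    msg["Subject"] = "Snort alert"
    msg["From"] = "snort@example.com"  # placeholder
    msg["To"] = "oncall@example.com"   # placeholder
    msg.set_content(line)
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

with open(ALERT_FILE) as f:
    f.seek(0, 2)  # start at end of file, like "tail -f"
    while True:
        line = f.readline()
        if line:
            mail_alert(line.strip())
        else:
            time.sleep(1)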

Where Snort isn’t so great is the massive amount of disk space it chews up with the log files it produces as well as the signature files used to detect rule violations. It’s not unrealistic that Snort operating within a high-traffic site could consume up to 100GB of disk space. Snort doesn’t especially require any particular level of processor, but it really will need a fast disk controller and a lot of space – let alone a network card that is as fast as or faster than the rest of your network (or else you can miss packets). If the budget can cater for it, the best advice would be to dedicate a machine directly to Snort’s use.

Wherever you choose to run Snort, you do have to remember to place it on your network in a strategic location, because it can only see traffic on its own subnet. There’s little point running Snort on your office desktop computer if your public-facing web and mail servers are housed in a co-location facility, for instance. In fact, depending on the complexity and size of your network, you may want to consider multiple Snort installations, to ensure all your key assets are protected by having one Snort system within each key subnet.

Foss @ Mobile

According to this story on Marketwire, there has been dramatic growth in FOSS applications for mobile devices. The story cites a study by Black Duck Software, a company that helps businesses develop software that contains FOSS components. It spiders the web looking for FOSS applications and collects downloadable apps into a FOSS repository.

It found around 2,300 FOSS applications for mobile devices. A summary of its results is over the fold.

Platform          Total Projects   Projects Released in 2008
-------------------------------------------------------------
Palm                       1,850                          113
iPhone                       391                          266
Windows Mobile               359                          174
Symbian                      322                           64
Android                      246                          191
Rim/Blackberry               237                           96
Maemo                         56                           17
LiMo                          28                            6

Although Palm leads in total apps, more apps were produced for Google’s Android platform in 2008 than for Palm. That’s a good sign. Indeed, given Android’s relatively late arrival compared to Rim/Blackberry and iPhone, it’s catching up quickly. With 246 apps, it has surpassed Rim/Blackberry (237) but trails iPhone (391). I’m guessing it’s going to jump into the lead by this time next year.

Open Music Source

Open Music Source is a web service-based music distribution network, backed by a community of music platforms, labels, bands, listeners and coders.

/************************************************************************
*
* OMS Web Service Functions 1.4
* -----------------------------
*
* Copyright: (C) 2007 moving primates GmbH
* Function: GetArtist
* Description: Gets the basic artist data of one artist.
* Change Date: 12/Jun/2007
*
*************************************************************************
*
* REQUIRED ELEMENTS:
*
* The following elements must be sent with the request:
*
* - ArtistID
* - Fieldlist
*
* FIELD ELEMENTS:
*
* The following are Field elements which define what you get back from
* the Web Service function. They must be named "Field", their value
* must be the exact name of the following enumeration and they must be
* created as a child element of "Fieldlist".
*
* Field elements are optional, however it is required that you
* create at least one; their send order is irrelevant.
*
* - ArtistName
* - ArtistURL
* - Website
* - MusicalInfluence
* - SoundsLike
* - ArtistPictureExists
* - FirstOnlineDate
* - PostalCode
* - City
* - CountryID
* - CountryCode
* - CountryName
* - ArtistGenreID
* - ArtistGenreParentID
* - ArtistGenreTopParentID
* - ArtistGenreName
* - ArtistParentGenreName
* - ArtistTopParentGenreName
*
************************************************************************/
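As an illustration of a request this header describes, the sketch below builds the GetArtist element tree in Python. Only the element names come from the documentation above; how the XML is transported to the service (envelope format, endpoint URL) is not specified here, so the sketch simply prints the request body.

import xml.etree.ElementTree as ET

def build_get_artist(artist_id, fields):
    req = ET.Element("GetArtist")
    ET.SubElement(req, "ArtistID").text = str(artist_id)
    fieldlist = ET.SubElement(req, "Fieldlist")
    for name in fields:  # each requested field is a Field child of Fieldlist
        ET.SubElement(fieldlist, "Field").text = name
    return ET.tostring(req, encoding="unicode")

# At least one Field element is required; their order is irrelevant.
print(build_get_artist(42, ["ArtistName", "ArtistURL", "CountryName"]))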

Digital Audio Linux Workstations

Ardour: professional-grade multitrack/multichannel hard-disk recording for Linux and OS X
Audacity: cross-platform soundfile editor with many nice features
Cue Station: software for programming LCS audio systems (http://www.lcsaudio.com)
EcaEnveloptor: GUI to create envelopes for anything in ecasound
Ecmd: Joel Roth's Perl/Tk-based front-end for ecasound
Frinika: music workstation software for operating systems running Java
GAS: Luke Tindall's GUI front-end for ecasound
GNUsound: sound editor with support for multiple tracks and 8, 16, or 24/32-bit samples
Jokosher: multi-track non-linear audio editor with usability in mind
KHdRecord: Peter Jodda's handy direct-to-disk audio recorder
MixMagic: audio mixing program for GNOME, handles large soundfiles
ProTux: hard-disk recording and audio processing suite
QRT: Doug Scott's Qt port of Paul Lansky's Rt realtime soundfile mixer
Qtractor: An audio/MIDI sequencer
SLab: full-featured hard-disk recorder, will record up to 64 tracks
SLabio: slabin and slabout are paired programs that provide command-line access to SLab data files
Simple Multitrack: "a minimalistic command line audio recorder", from Kurt Rosenfeld
TkEca: Luis Gasparotto's handy Tcl/Tk interface for ecasound
Traverso: multitrack audio recording and editing program from Remon Sijrier
Visecas: a GTK-based front-end for ecasound, from Jan Weil
WaveMixer: easy-to-use multitrack wave editor, from Raoul
Wired: audio/MIDI music production system
XO Wave: Java-based hard-disk recording/editing system
ecasound: hard-disk recording and audio processing from the Linux console or X
mixbas: "...a small realtime mixer program..." for the Linux console, from Reine Jonsson

Linux For Audio

Audio software on the Linux platform is becoming very stable and advanced. Many of the coolest projects have been in development for close to 10 years and there are several installation and setup options for new users to choose from which smooth out the rough edges and ease you through the process of getting started with Linux Audio.

You can run all your VST plugins easily, and using the JACK audio server it's possible to connect multiple applications to each other on a single PC, across multiple PCs, and over a network. The ALSA and OSS drivers provide support for hundreds of consumer grade soundcards and most of the top brand multi channel professional devices too.
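For example, JACK ships with command-line helpers for exactly this kind of routing. The sketch below lists the ports the running JACK server exposes and wires a soundcard capture channel into an application input; the port names are examples and must match what jack_lsp actually reports on your system.

import subprocess

# List every port the running JACK server currently exposes.
ports = subprocess.run(["jack_lsp"], capture_output=True,
                       text=True).stdout.splitlines()
print("\n".join(ports))

# Wire the soundcard's first capture channel into an application input.
subprocess.run(["jack_connect", "system:capture_1", "ardour:audio_in 1"],
               check=True)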

In short there are literally hundreds of tools, plugins, and weird and wonderful choices you can make with Linux Audio. With so much to choose from there is no excuse for not having tried it out at least once.


There are many Linux distributions that are aimed at providing a seamless audio and multimedia experience:
* 64 Studio - Native free software for digital content creation on x86_64 hardware
* Agnula - A Complete Music Distribution and Operating System
* Debian Multimedia project - Aimed at audio, video and graphics enthusiasts as well as professionals
* dyne:bolic - The instant Bootable Audio CD Distribution
* Planet CCRMA - Precompiled Audio Software for Redhat Systems
* Studio to Go - Everything you need to make music on a Bootable CD
* Ubuntu Studio - Aimed at audio, video and graphics enthusiasts as well as professionals

FOSS and Software Expenditure in West Africa

The Free Software and Open Source Foundation for Africa (FOSSFA) is a not-for-profit Pan-African organisation whose mission is to promote the use of FOSS (Free and Open Source Software) and the FOSS model in African development, and to support the integration of FOSS in national policies. Since its creation in 2003, FOSSFA has become the continent's premier non-governmental FOSS organisation.

FOSSFA, in collaboration with the Open Society Initiative for West Africa (OSIWA), has recently launched “FOSSWAY - FOSS Advocacy in West Africa and Beyond”, a 3-year advocacy project to increase the awareness and use of FOSS in the western part of the African continent at all levels, including academia, educational institutions, the media, SMEs, and governments. The project will also advocate for consideration of FOSS issues in the formulation of policies and standards in the sub-region. It will establish researched and accurate data on ICT use, software needs and expenditure, FOSS implementation total cost of ownership, opportunity costs, expectations and emerging trends in West African countries.

FOSSFA therefore, within the framework of FOSSWAY, intends to launch a single service contract to provide the appropriate input and researched information to the advocacy project. Taking into account the evolving ICT market, the developmental context and the different economic layers of the sub-region, the study should provide an accurate ICT overview of at least 5 West African countries.

ICT 4 Education

The OpenEducationDisc focuses solely on meeting the educational needs of students of all ages. Software has been chosen to address specific IT needs across a wide range of subject areas.

OpenOffice.org – Fully compatible office software for your school work
Dia – Make technical diagrams and flowcharts
Scribus – Create professional looking posters and magazines
GanttProject – Plan your school projects with this project management software
FreeMind – Collect your ideas with this mind mapping Software
PDF Creator – Make PDF documents from any program
Sumatra PDF – Read PDF files quickly and easily

Firefox – A safe, secure and fast web browser
Thunderbird – Manage your emails better than ever – Reclaim your inbox!
Pidgin – Talk to your friends whatever instant message client they use
Kompozer – Create web pages easily, without having to code
RSSOwl – Keep up with your favourite internet news feeds on your desktop

GIMP – Edit digital photos and create graphics
GIMP animation – Create animations
Inkscape – Make professional looking vector graphics
Pencil – Animate your own cartoons
Blender – 3D graphic modeling, animation, rendering and playback
Tuxpaint – Drawing program for children ages 3 to 12

VLC – Play music, videos and DVDs
Audacity – Record, edit and mix music
TuxGuitar – Compose your own music
Piano Booster – Teach yourself the piano
Avidemux – Edit movies and add special effects
Infra Recorder – Burn your own CDs and DVDs
CamStudio – Record your actions on a computer
Really Slick Screensavers – Great looking screensavers
Science and Mathematics

Nasa Worldwind – Discover the earth and other planets
Greenfoot – Teach yourself how to program
GraphCalc – A graphical calculator
Guido Van Robot – Learn how computer programs work
CarMetal – Cool mathematical modelling tool
Maxima – University standard computer algebra system
Celestia – Explore the universe in three dimensions
Stellarium – A planetarium on your PC

FreeCiv - Control the world through diplomacy and conquest
FreeCol – Discover the ‘New World’ and build an empire
Numpty Physics – Solve puzzles using physics
TuxTyping 2 – Learn to type like a pro
Tux of Math Command – Test your mathematical skills
Winboard Chess – The classic game of chess

GTK+, 7zip, Abakt, Clamwin, HealthMonitor, Workrave
Httrack, Tight VNC, Filezilla, Azureus, WinSCP

Open ICT Access Solutions for Socio-Economic Development

Open Access in the context of Communication (Open Communication) means that anyone, on equal conditions and with a transparent relation between cost and pricing, can get access to and share communication resources on one level in order to provide value-added services on another level in a layered communication system architecture. There is currently strong momentum in the deployment of infrastructure such as optic fiber and wireless, and in the use of ICT in general, such as mobile phones and multipurpose telecentres. If used wisely, we believe these developments can facilitate the provisioning of relatively inexpensive, easily accessible, diversified and expandable ICT services.

Witness the Geospatial trends shaping today's IT industry

The international conference for Free and Open Source Software for Geospatial (FOSS4G) showcases the technologies, standards, case studies and geospatial trends reshaping today's IT industry.

The past 5 years have seen an explosion of location-based applications, driven by ubiquitous mobile platforms, extensive access to no-cost maps and data, crowd sourcing, the development and uptake of solid spatial standards, the integration of cross-agency data through Spatial Data Infrastructures, the development and commercialisation of geospatial Open Source software, cloud computing and more. At FOSS4G you will see the best international developers, policy makers, sponsors, decision makers and geospatial professionals discuss the latest geospatial applications, standards, government programs, business processes and case studies.

FOSS4G retains many of the engaging characteristics of its Open Source heritage. With Bird of a Feather sessions, code sprints, install-fests and impromptu project meetings, there is an unparalleled opportunity to take part in active communities and provide input into the direction for a variety of projects.

It is no surprise that standards and interoperability within Spatial Data Infrastructures feature prominently in the FOSS4G conference. Standards such as those offered by the Open Geospatial Consortium and ISO provide the framework that facilitates interoperability. Interoperability through Open Standards is a corner stone of FOSS4G where you will see Open Source and proprietary software working together in harmony.

Open Source communities are renowned for their responsiveness and helpfulness, and are now backed by companies offering enterprise support, debunking the myth that there is no one you can call to support Open Source. At FOSS4G, you can talk with the enthusiastic communities in workshops, install-fests, and between sessions, and meet the companies stepping up to provide enterprise support for Open Source products.

The open source development process brings a refreshing level of honesty and forthrightness. Systems integrators can search mailing lists, issue trackers and source code, so there are no surprises discovered after deploying an open source application. In the same vein, FOSS4G is famous for its project comparison shootouts, which help users select appropriate products, and developers to identify areas they need to focus on.

It will be a compelling event for anyone who needs to know how Geospatial technologies are shaping tomorrow's IT landscape.

FOSS advocacy in Africa receives a big boost from the Open Society Institute

The Free Software and Open Source Foundation for Africa (FOSSFA) has received a grant from the Open Society Initiative for West Africa (OSIWA) towards the FOSS Advocacy for West Africa (FOSSWAY) project. FOSSWAY is a one-million dollar project which is intended to entrench advocacy for free and open source software in the Western part of the African continent beginning January 2009.
FOSSWAY will advocate for FOSS and its use at all levels, including academia, the media, and secondary, vocational and technical educational institutions. The project will also advocate for the consideration of FOSS issues in the formulation of policies and standards in the sub-region. The project shall not just promote, but also actively enable, all participating agencies, schools, universities, standards bodies, media groups, advocates, groups and individuals to use and benefit from FOSS. Having drawn its project team from among the best FOSS advocates, practitioners, technicians, developers and trainers in the region, FOSSWAY promises to push the benefits of FOSS beyond the boundaries attained so far and to increase the adoption and use of FOSS in West Africa. FOSSWAY, in its cross-cutting nature, shall include FOSS research, hands-on training, competitions, media campaigns, on-the-ground roadshows and prizes.
Nnenna Nwakanma, FOSSFA Council Chair, thanked OSIWA for the grant and expressed the high hopes FOSSFA has for the project, not only as a tool for policy advocacy but also as a support for business, schools and the media. Nii Amon Dsane, FOSSFA Secretariat Coordinator, believes the project will allow FOSSFA to address issues that have so far either not been covered enough or been neglected. Among these issues, he said, are the need to conduct a FOSS needs analysis for academic institutions and to study the total cost of ownership of FOSS packages.
Ben Akoh, ICT/Media Program Manager at OSIWA, highlighted the integral role FOSS has in Africa's technology development. He said that if African governments develop the sector, address capacity-building challenges, define policies in support of FOSS, and make technology procurement processes more transparent, the ensuing return on investment and benefits will be felt in every development sector, including health, governance, academia and social life.
Development, ICT, software and training partners intending to join, contribute to, implement or host a part of the project activities are invited to contact the FOSSFA Secretariat as soon as possible. The FOSSWAY project seeks to work with national media groups, academia, training centers, governments, development organizations and research organizations, as well as national and sub-regional FOSS groups.

Info-activism is about turning information into action

10 tactics for turning information into action includes stories from more than 35 rights advocates around the world who have successfully used information and digital technologies to create positive change. This project, from the Tactical Technology Collective, includes a video featuring 25 interviews with advocates, alongside a deck of cards that details info-activism case studies, features tools and provides advice from people about the tactics and tools they have used in different contexts.

Stephanie Hankey, co-founder of Tactical Tech says, “The project came about when we hosted an info-activism camp in India earlier this year. The event brought together more than 100 rights advocates, technologists and designers from around the world who we knew had really interesting stories to tell about how they had turned information into action using digital technologies. We decided to document and explore people's stories throughout the camp. When we had finished we knew that what we had collected was pretty remarkable. Many of the stories highlighted ground-breaking use of the internet and digital technologies. They show what is possible for rights advocates to achieve now even with very few resources.”

The 35 info-activism stories included come from 24 different countries, including Lebanon, India, Tunisia, Egypt, Kenya, Indonesia, South Africa and the UK. They include the story of Noha Atef, whose blog, TortureinEgypt.net, has led to the release of illegally detained prisoners in Egypt. Sami Gharbia explains how activists upset the government in Tunisia when they used Google Earth and Google Maps to highlight stories of rights abuses. Dale Kongmont explains how he uses video karaoke and YouTube in Cambodia to spread the word about the mistreatment and rights of sex workers in Asia. Ken Banks, the creator of FrontlineSMS, tells how this software, which allows people to send and receive bulk mobile text messages, was used for citizen reporting during this year's violent clashes in Madagascar. Dina Mehta, from India, explains what it was like to be part of an online group that worked via Twitter to get blood donors and other essential support to hospitals during the Mumbai terror attacks.

Tanya Notley, who managed the project says, “We hope these stories can be used to inspire others. The video and cards provide the sort of in-depth background information you usually don't have access to. People have told us how much their digital activism cost, what tools they used, what skills they needed, what the local context was and they have revealed exactly what happened. All of this information can be used by other people to develop their own ideas.”

10 tactics for info-activism will first be launched on December 4th in London.

The compelling economics of Linux

The Economics

Today the Linux Foundation issued a report looking at the value of the Linux platform in terms of code. This was an update of a 2002 study that estimated the value then at $1.2 billion. Today's value: $10.8 billion. The study focused on the Fedora project, which has been a core part of Linux's success in the server and desktop marketplace. Although it wasn't specifically covered in this paper, it is also worth applying the economics of Linux to some of the fastest-growing segments of technology: mobile devices, consumer electronics and low-cost netbooks. This is the future of Linux, and the smart bets are leveraging a $10.8 billion investment to the hilt.
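For readers curious how a dollar figure like this is derived: studies of this kind (David A. Wheeler's original estimate and this update) count physical source lines of code (SLOC), convert them to development effort with the basic COCOMO model, and then price that effort at an average developer salary plus overhead. A minimal sketch of the calculation follows; the salary and overhead multiplier are illustrative assumptions of mine, not the study's exact inputs, and the published studies run the model package by package rather than over one giant code blob.

    # Sketch of a SLOC-based cost estimate in the style of the
    # Wheeler / Linux Foundation studies. Salary and overhead
    # figures below are illustrative assumptions.
    def estimated_cost(sloc, avg_salary=75_000.0, overhead=2.4):
        ksloc = sloc / 1000.0
        person_months = 2.4 * ksloc ** 1.05  # basic COCOMO, organic mode
        person_years = person_months / 12.0
        return person_years * avg_salary * overhead

    # Example: a hypothetical 1,000,000-line package
    print(f"~${estimated_cost(1_000_000) / 1e6:.0f}M to redevelop")  # ~$51M

Because the effort exponent is slightly above 1, larger bodies of code cost disproportionately more to redevelop, which is why a whole distribution's worth of packages adds up to billions.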

Linux is Everywhere

I am constantly amazed by how rare it is to work with any consumer electronics (CE) device that does *not* run on Linux. Other than two big markets, laptops and mobile phones, nearly every new consumer electronics device runs Linux: Sony televisions, the Amazon Kindle, Dash automotive GPS, and nearly every other device you can imagine.

A CE company can either try to roll its own operating system, license a proprietary one like Windows or VxWorks, or use Linux. The reasons they use Linux are simple. It is easiest to hire people familiar with it. It supports more devices than any operating system in the history of the world. It is completely open, so if something doesn't work, you can fix it yourself or pay someone to do it. There is amazingly good support available from mailing lists, and commercial support available at any service and price point. You can brand the device however you want. And it gives you a real Internet experience, with the capability to do any level of networking and application support.

One to Watch: Moblin

The final two frontiers for Linux in consumer electronics are mobile phones and laptops. I’d like to congratulate Google on shipping their first Linux-based phone this week. This is a great accomplishment, and Android should prove to be a major competitor in building a mobile phone ecosystem.

Another consumer electronics project I'm excited about is Moblin. Though initially focused on netbooks (i.e., small laptops), I see Moblin as creating the ideal platform for a large universe of devices, from MIDs to in-car entertainment and more. Unlike Android, which uses Linux at the base but rewrote most of the upper-level software, Moblin leverages the enormously valuable work of the entire Linux ecosystem (that $10.8 billion), while working to fix the small bugs and incompatibilities that can still cause frustration in desktop Linux. And by working within the Linux ecosystem, the improvements Moblin developers make to a whole array of packages and libraries are passed back to the upstream authors, so that all Linux users can take advantage of them, adding even more value to that multi-billion-dollar pie.

In a couple of years, I expect Moblin to be playing the role of a standard platform for netbooks, MIDs, consumer electronics and more. Already there is an incredible ecosystem around the platform, with hundreds of ISVs, dozens of hardware OEMs and many Linux operating system vendors on board. Given the compelling economics of this approach, I think it will be harder and harder to find devices that don't use it in the future.

George Gilder wrote an enormously influential article in 1993 titled Into the Fibersphere. He stated:

“As industry guru Andrew Rappaport has pointed out, electronic designers now treat transistors as virtually free. Indeed, on memory chips, they cost some 400 millionths of a cent. To waste time or battery power or radio frequencies may be culpable acts, but to waste transistors is the essence of thrift. Today you use millions of them slightly to enhance your TV picture or to play a game of solitaire or to fax Doonesbury to Grandma. If you do not use transistors in your cars, your offices, your telephone systems, your design centers, your factories, your farm gear, or your missiles, you go out of business. If you don’t waste transistors, your cost structure will cripple you. Your product will be either too expensive, too slow, too late, or too low in quality.”

The same is becoming true with Linux, and for one of the fastest growing segments of computing, the project to watch is Moblin.

Linux to Ship on More Desktops than Windows

For those who decry the constant prediction of the “year of the Linux desktop”, I am happy to say that next year Linux may actually ship on more desktops than Windows or the Mac. That is right, I said next year. What is driving this? Two words: fast boot.

Matt Richtel of the New York Times wrote a great article on Sunday about the demand for faster start-up times on computers. In the story he chronicled how HP, Dell, Lenovo, Asus and an array of other PC makers are starting to develop “machines that give people access to basic functions like e-mail and a Web browser in 30 seconds or less.” Here is the interesting part: Linux is providing that access.

Ashlee Vance, also of the New York Times, did a great follow-up piece on the story, chronicling just how widespread this trend is becoming. He states, “Over the next few months, the instant-on technology should become mainstream. Here’s a look at what’s available and what’s coming in the instant-on market.”

The evidence is overwhelming:

“ - DeviceVM – This Silicon Valley start-up has emerged as the leading independent maker of instant-on software. H.P., Lenovo and Asus use modified versions of DeviceVM’s Splashtop software. In all cases, they provide quick access to a Web browser, instant messaging software, photos and voice over Internet protocol software. The large PC makers tend to ship Splashtop on laptops aimed at consumers.
- H.P. – Today, you can buy HP’s Envy laptop with the Instant On Solution software, which is Splashtop in disguise. In the coming months, H.P. plans to ship it on an undisclosed number of systems.
- Dell – In an unusual move, Dell has done a lot of customization work with its instant-on tools. The company plans to ship something called Latitude On with a pair of laptops. This Dell-made software will permit access to e-mail and the other basic functions. The software will actually run on a separate ARM processor, often found in mobile phones, rather than a standard Intel or Advanced Micro Devices chip.
- Lenovo – By early next year, Lenovo will ship a version of Splashtop on some of its consumer laptops.
- Phoenix Technologies – This software maker has been working on a downloadable software package called HyperSpace. It will let you start a Linux-based system early, while Windows boots in the background. People can then switch back and forth between both sets of software as they desire. It should be widely available in January, with Phoenix charging a monthly subscription fee for the software.”

What does this mean for Linux? First, it means that Linux is more central to the user experience. As the New York Times points out, this is “Microsoft potentially losing the user experience.” Linux is not only powering instant-on applications; the Moblin project has already demonstrated a five-second boot at the Linux Foundation’s recent Plumbers conference.
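As a minimal illustration of how a boot-time claim like that can be sanity-checked on any Linux system (this is my sketch, not anything from the study or the demo), a script run the moment the desktop appears can simply read the kernel's uptime counter:

    # Sketch: approximate boot-to-desktop time on Linux by reading
    # /proc/uptime (seconds since the kernel started) when run at login.
    with open("/proc/uptime") as f:
        uptime_seconds = float(f.read().split()[0])
    print(f"Kernel has been up for {uptime_seconds:.1f} seconds")

Tools like bootchart were the standard way at the time to profile where those seconds actually go during startup.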

We may see a world at the end of next year where Linux ships on almost every notebook computer, regardless of whether it is loaded with Windows. This, in addition to the huge potential of the netbook, mobile internet device and mobile Linux markets, could mean huge and immediate inroads for the Linux desktop, albeit not in the form most people predicted many years ago when the first “year of the Linux desktop” was declared.

Intel and Taiwan Inc. Invest in Open Source Research Center

Intel announced today (Thursday) its plans to partner with the Taiwanese government and invest in the island’s IT industry to launch an Open Source Software Development Center for mobile devices. Building on Taiwan’s undisputed role as a leading center for creating connected consumer devices, Intel CEO Paul Otellini indicated that Intel had signed an agreement with the Taiwan Ministry of Economic Affairs (MOEA). MOEA and Intel will establish a center for enabling Moblin and other OSS optimized for devices based on the Intel Atom. At the same time, Intel Capital will invest NT$386M (US$11.5M) in Taiwanese carrier VMAX to support the deployment of Taiwan’s first mobile WiMAX network in the first half of 2009.

This move by Intel is good for everyone: good for Intel, which is working with a large ecosystem around its recently launched mobile/embedded Atom architecture CPUs. It’s good for Taiwanese OEMs, who have already launched Atom-based devices but who crave the availability of a richer Linux-based software stack and more opportunity for localization and local value-added software. It’s good for Taiwanese end users, who will enjoy high-bandwidth wireless internet access with new options for data and streaming media. And it’s good for the “rest of us”, since Taiwan-local rollouts of new concepts and products pave the way for low-cost, high-volume versions of the same technologies and devices around the world in short order.

For both fans and critics of the MID concept and form factor, this double-whammy announcement means the MID is here to stay. Industry analysts project that Atom-based MIDs will see worldwide shipments of more than 86 million units by 2013.

Giving the nascent MID device class firehose-level bandwidth, together with a desktop- and server-compatible CPU running an open source stack, opens this converged platform to a wealth of new possibilities. Combining the low-power Atom with Linux-based Moblin and high-speed WiMAX lends solid credibility to Intel’s vision for MIDs: one that fuses long-lived, well-provisioned, connected mobile devices with always-on, always-available multimedia and social networking.

The best part is that this is on a completely open source stack based on mainstream Linux technology. The more successful these efforts are the stronger Linux will become in other categories of desktop computing. It seems Intel has really gotten the concept of how to work with the community and further their business goals. I suspect many of their industry counterparts are taking note.

Linux Continues to Define the Future of Computing While Microsoft Follows

It is hard for the executive director of the Linux Foundation to feel bad for Microsoft, but they are having a bad week while Linux continues to move forward in innovative ways into new markets for computing. Let’s take a look at the difference between Microsoft and Linux this week:

Monday: Microsoft starts its week with a front-page story in the Wall St. Journal titled “Microsoft Battles Low-Cost Rival for Africa.” The article documents Microsoft engaging in questionable practices against a Linux competitor that is springing up across Africa, not because of any corporate conspiracy, but because it is free and open.

Tuesday: Microsoft reveals “Windows 7,” which is widely regarded as an attempt to right the wrong that is Vista. The headlines were brutal. Infoworld: “Windows 7: The ‘dog food’ tastes bad”; Dallas News: “Microsoft previews Windows 7, and it looks like… Vista”; Computerworld: “Is Windows 7’s new UAC just lipstick on a pig?” and “Windows 7, Office 14 to create bigger lame ducks than George W. Bush.”

Tuesday: Microsoft also announced its cloud computing platform, summed up best by ZDNet: “Microsoft’s Azure cloud platform: A guide for the perplexed.” No licensing, pricing or release-date information was given. This, for something that Amazon has offered with a Linux-based solution for over a year on the EC2 cloud.

We aren’t even halfway through the week yet, and Microsoft is either getting battered or following technical trails already blazed by Linux. In contrast, Linux is having a great week.

Monday: The New York Times shows how Linux may actually ship on more desktops next year than Windows, albeit in an unconventional way, with instant-on boot. “Instant-on machines represent a new opportunity for the open-source Linux operating system, which can compete with Windows.”

Wednesday: HP reveals it is rolling out a Linux-based notebook computer. The HP Mini 1000 with MIE (Mobile Internet Experience), a Linux-based OS, will ship at a $379 price point. HP is following moves by Dell, Asus, Lenovo and others to ship low-priced Linux PCs. It is also worth noting that Microsoft had to extend the life of Windows XP just to compete in this market.

Thursday: Intel and Taiwan announce they are teaming up on a mobile Linux development lab. The lab will work on creating Moblin-based devices in one of the most promising categories of computing.

Linux on more laptops than Windows? Dell, HP, Asus, Lenovo and others shipping Linux desktops at unheard-of prices? Microsoft stuck in a rut, needing to follow rather than lead? And those are only a few of the things going on in Linux this week. As we reach the end of 2008, 2009 is shaping up to be a pretty good year for Linux.

Linux Provides Steady IT Foundation for Banks in a Tough Economic Climate

Times are tough in the banking industry. According to the AP, 100,000 bank employees have been laid off over the past two years. Overall, banking industry unemployment has almost tripled and bank stocks have cratered. Even with astronomical bailout money becoming available, banks are looking for ways to consolidate.

Consolidation can be both forward and reverse. The seemingly more positive “forward consolidation” is when one bank buys another, gaining market share and “market efficiencies.” It’s not all positive, as layoffs are a part of this scenario. Consolidating “in reverse” is generally more painful, though: selling off assets, looking to make do with less and, invariably, cutting headcount.

It’s not often the first thing you think about, but technology systems are heavily impacted: new users, different bosses, different business processes. The result is upheaval in IT infrastructure that can leave banks vulnerable.

In this environment, Linux provides a distinct competitive advantage. Linux has zero licensing fees, so pure cost is a key benefit. Linux support can be found at almost any level, from free e-mail lists and bulletin boards to 24/7 mission-critical support via enterprise subscriptions. Banks that run Linux have an operating system that supports the greatest number of chip architectures, hardware platforms and forms of computing (from blades to mainframes). Simply put, Linux is the best common denominator in diverse IT environments. And it’s not just the operating system: coding and porting customized applications, common in banking, is significantly easier on open platforms.

IT departments won’t benefit much from bailout money; they need to make good technology decisions. As I speak with IT leaders at banks lately, the anecdotal evidence shows that Linux is a key technology component in any consolidation plan.

Dell Introduces a Full Linux Notebook for $299.00

Dell introduced the Inspiron N notebook computer this week for $299.00. This is a full-fledged notebook computer, with a 15-inch screen, a DVD burner, a 160 GB hard drive and more, for $299.00. This is breakthrough pricing in a market that can’t be recategorized by Microsoft as “low-cost small notebook PCs.” It is hard to see how Microsoft can maintain its usual margins, since a typical Windows license would represent roughly a third of the cost of this PC. Linux’s fundamental pricing advantage here could not be more compelling.

Linux the Clear Winner in Google OS

Most of you have seen the news today from Google formally announcing its Chrome Operating System for netbooks using Intel x86 and ARM chips. This is being painted as a classic “clash of the titans” between Google and Microsoft, with Google finally directly assaulting Microsoft’s top cash business. (Google has already opened the war against Microsoft’s other cash cow, Office, with Google Docs.) While that is a great story, I prefer to frame it as David vs. Goliath, with the little OS that could, Linux, as the foundation of this announcement, as well as of the other operating systems challenging Windows.

What does this announcement mean to the computing industry?

Microsoft’s pricing model is not sustainable in the new world of PC/mobile convergence. Microsoft as it has existed for the past 20 years does not fit into a world of free carrier-backed netbooks and an internet OS. It’s been reported that Windows 7 Starter will be priced around $45-$55; on a $200 netbook with already razor-thin margins, the OS alone would be roughly a quarter of the device’s price, and that pricing doesn’t work. It certainly doesn’t work in the world of free PCs subsidized through carrier subscriptions. When PC makers threaten to use another operating system if they don’t get Windows 7 at a lower price, they will not be bluffing: Google Chrome OS, Moblin and desktop Linux will be free. Microsoft is not blind to this, but it is questionable whether its recent moves toward services will happen soon enough.

The new PC model is built around services: Google ads, online music/video/TV services, subscriptions to applications built and run from the cloud. The old world of high-margin operating systems and desktop applications is simply not very relevant to this new world, and neither are native applications unique to one OS. Even such workhorses as personal finance and digital photo applications have moved to the browser, where those apps are available on any OS. Microsoft itself shut down its Microsoft Money product, which was built under the old software sales model. Google wants to capitalize on this trend with Google Chrome OS and its own bevy of online services.

Linux (and consumers) are the true winners. Linux is the basis not only for the new Chrome OS but also for the other challengers to Microsoft’s desktop monopoly, such as Moblin, Nokia’s Maemo, the Palm Pre, the many versions of desktop Linux such as Ubuntu or SUSE, Android and more. (It’s also the basis of all of Google’s application services, as well as every major cloud offering.) Linux is the foundation for this new wave of computing because it is available on more architectures and supports more devices than any other OS. (By using the Linux kernel, Google Chrome OS gains the advantage of all of Linux’s hardware drivers.) Linux also gives PC makers and mobile carriers the flexibility to use it without onerous pricing and branding restrictions. The more companies and manufacturers base their products on Linux, the stronger Linux becomes. Say goodbye to monopoly pricing.

There are more questions raised by this announcement than answers, but I feel the three points above are clearly strengthened by this news. We look forward to seeing Google collaborate closely with the Linux community and industry to enhance Linux as the foundation for this new computing model.

Open Source Holds an Intervention

Sometimes you need to hit rock bottom before you can get the help you need. IDC acted as the “interventionist” today, publishing a new report showing how open source is growing in the down economy.

The study released today shows that “worldwide revenue from open source software will grow at a 22.4% compound annual growth rate (CAGR) to reach $8.1 billion by 2013. This forecast is considerably higher than 2008 for three reasons: (1) the bottom-up list used to calculate the revenue has expanded through an exhaustive effort to include more projects in this forecast; (2) open source software has had a much higher level of acceptance over the past 12 months than previously expected; and (3) the economy accelerated the uptake and use of open source software in the closing months of 2008.”
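To get a feel for what a 22.4% CAGR implies, you can compound backwards from the $8.1 billion 2013 figure. A minimal sketch follows, assuming the forecast runs from a 2009 base year (the base year is my assumption; the excerpt does not state it):

    # Sketch: revenue path implied by a 22.4% CAGR ending at $8.1B in 2013.
    # The 2009 base year is an assumption for illustration.
    cagr, target = 0.224, 8.1e9
    base_year, end_year = 2009, 2013
    for year in range(base_year, end_year + 1):
        revenue = target / (1 + cagr) ** (end_year - year)
        print(f"{year}: ${revenue / 1e9:.1f}B")

Run as written, this prints roughly $3.6B for 2009, growing year by year to the forecast $8.1B in 2013.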

Economic crises tend to clarify people’s thinking and accentuate existing trends in the marketplace, and this is no exception. The IDC report underscores the fact that open source provides real value for the money; it took a recession for people to figure that out. For a world addicted to high-priced proprietary software, this may have been the rock bottom that transitions the enterprise IT industry to one of shared innovation, true value for the money and higher levels of service.

Protecting Linux from Microsoft

Earlier this week, the Wall Street Journal’s Nick Wingfield broke a story on Microsoft selling a group of patents to a third party. The end result of this story is good for Linux, even though it doesn’t placate fears of ongoing attacks by Microsoft. Open Invention Network, working with its members and the Linux Foundation, pulled off a coup, managing to acquire some of the very patents that seem to have been at the heart of recent Microsoft FUD campaigns against Linux. Break out your white hats: the good guys won.

The details are that Microsoft assembled a package of patents “relating to open source” and put them up for sale to patent trolls. Microsoft thought it was selling them to AST, a group that buys patents, offers licenses to its members, and then resells the patents. AST calls this its “catch and release” policy. Microsoft would certainly have known that the likely buyer, when AST resold the patents in a few months, would be a patent troll that would use them to attack non-member Linux companies. Thus, by selling patents that target Linux, Microsoft could help generate fear, uncertainty and doubt about Linux without needing to attack the Linux community directly in its own name.

This deal shows that the mechanisms the Linux industry has constructed to defend Linux are working, even though the outcome also shows Microsoft continuing to act antagonistically toward its customers.

We can be thankful that these patents didn’t fall into the hands of a patent troll, which has no customers and thus cares nothing about customer or public backlash. Luckily, the defenses put in place by the Linux industry show that collaboration can produce great things, including the legal protection of Linux.

The reality is that Windows and Linux will both remain critical parts of the world’s computing infrastructure for years to come. Nearly 100% of Fortune 500 companies support deployments of both Windows and Linux. Those customers, who have the ear of Microsoft CEO Steve Ballmer, need to tell Microsoft that they do not want Microsoft’s patent tricks to interfere with their production infrastructure. It’s time for Microsoft to stop secretly attacking Linux while publicly claiming to want interoperability. Let’s hope that Microsoft decides going forward to actually try to win in the marketplace, rather than continuing to distract and annoy us with their tricky patent schemes. And let’s offer a big round of applause to Keith Bergelt and OIN for their perfectly executed defense of the Linux community.
 
