Tuesday, 21 October 2008
The two top runners are currently Fedora 10 (beta) and Ubuntu 8.10 (beta).
First, a warning: beta software is not for everybody. You can end up with a dead machine if you hit an issue with an update, and you have to know how to recover your machine when that happens. You also need a good set of backups.
There is a superb tool called remastersys that creates bootable backups of not just your data but your entire operating system, which I use in conjunction with an external FAT-formatted USB drive.
One thing I have found is that it is hard to find a distro that supports the very recent hardware and chipsets inside the Aspire One, so I have narrowed my evaluations to those distros shipping the brand-new 2.6.27 kernel or later, as that is the only one that seems to support the chipsets out of the box. I also need NetworkManager 0.7 to support my USB mobile broadband modem (a Huawei 169G).
I was using Ubuntu at first but needed to change to an RPM/yum-based system because we use CentOS 5 everywhere at work, and since I am building code that needs to be run and installed in that environment, the differences in the package managers were just too great for me to be comfortable with. (Yes, I write software on my Aspire netbook; it's quite capable of it. I was very surprised at how well Eclipse performs on this platform, almost as fast as my 2008 MacBook.)
Installation of both distros was relatively easy. I used UNetbootin to create a bootable 1G USB thumbdrive directly from the distributed ISOs and booted from that, and I did not notice any issues with the installation of either distro. Out of the box, Fedora was slightly better in this respect, with a few caveats. Choose the GNOME variant of each, as I have found that the newer KDE setups are somewhat less functional; in particular, current KDE incarnations (4.1+) seem to have issues with saving settings.
Ubuntu produced the more complete setup here, but only after I deleted the /etc/X11/xorg.conf file and allowed the new X.Org 7.4 system to work its magic with configless boots.
glxgears turned in a performance of about 350 fps, which is fast enough to enable Compiz desktop effects; however, since that is just eye candy, it's debatable whether it is worth enabling on a device of this class. A minor irritation was that, in order to support multiple screens, the screen-resolution app has to create an xorg.conf with a virtual screen that encompasses both physical screens, and it initially gets the size wrong, which means that the external monitor resolution is low until you hand-edit it upwards. I just doubled the dimensions in both directions and rebooted, and was then able to select the 1280x1024 resolution I was looking for on my monitor.
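For reference, the hand-edited part of xorg.conf looks something like the fragment below. The identifier and exact Virtual dimensions here are illustrative, not copied from my machine; the virtual area just has to be at least as large as both physical screens combined (e.g. the 1024x600 panel beside a 1280x1024 external monitor):

```
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        # Wide enough for 1024 + 1280, tall enough for the taller screen.
        Virtual 2304 1024
    EndSubSection
EndSection
```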
Fedora was a mixed bag. It again needed the delete-my-xorg.conf trick, but try as I might I could not get it to properly support an external monitor at a reasonable resolution. However, the performance of the driver in glxgears is significantly better, getting 550-600 fps. I'm still trying to determine the reasons for this difference.
Due to the inclusion of NetworkManager 0.7 in both distros, wireless was a doddle: both wifi and the USB modem worked out of the box. However, both distros suffered from the same issues regarding DHCP and wifi performance.
Every now and then they refused to acquire either a wired or wireless IP address due to DHCP timeouts; rebooting the machine seemed to clear the problem. The wifi uses the new ath5k driver for the Atheros chipset in both cases, and I have found that this driver seems to affect the sensitivity of the wifi, with far lower signal strengths than under the older madwifi driver, and frequent dropouts and stalls.
Also, under the madwifi driver there were a set of sysctls that would enable the wifi LED; these don't work on the ath5k driver, and I have not found any substitutes. The driver binds to the led_class module, and looking at the source it has functions for enabling/disabling this mode, but I can't find any documentation on how to enable it.
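One thing worth trying is to look under /sys/class/leds, where led_class exposes any LEDs a driver has registered. The sketch below is a guess at the mechanism: the "ath5k*" name pattern is an assumption on my part, and I have not confirmed that the driver actually registers anything there.

```shell
# Probe for any LED the ath5k driver may have registered via led_class.
# The glob pattern "ath5k*" is an assumption, not documented behaviour.
probe_ath5k_led() {
  found=0
  for led in /sys/class/leds/ath5k*; do
    [ -e "$led" ] || continue      # unmatched glob stays literal in sh
    found=1
    echo "found $led, trigger options:"
    cat "$led/trigger"             # trigger file selects when the LED blinks
  done
  [ "$found" -eq 1 ] || echo "no ath5k LED registered"
}

probe_ath5k_led
```

If a LED does show up, writing one of the listed trigger names back into the trigger file is the usual led_class way to activate it.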
Ubuntu has working suspend and resume; the sound sometimes does not restore properly coming out of sleep or hibernate, but that is a minor annoyance. On Fedora both modes were a bust, resulting in a locked-up machine requiring a hard reset.
I must admit my needs are probably different from average: I need to enable a full local LAMP stack and software development tools (yes, it runs fine, and no, the machine is not slow after doing so). The Aspire has dual 1.6GHz cores, up to 1.5G of RAM and a 120G HDD, so it's quite capable of handling this load. It should be noted that the spec of the machine is almost identical to that of an instance running on the Amazon EC2 cluster (1.6GHz, 1.6G RAM, 160G HDD).
With Ubuntu, setting up the stack was hard; on Fedora it was a breeze. Fedora even has a full Eclipse 3.4.x install in the repository, and has installable packages for Eclipse PDT, Subclipse and Xdebug, so my usual fight to get a working PHP dev environment was eliminated.
I was able to set up the entire machine for running our web app, including checking out the code from our Subversion repository, in under 30 minutes, versus the 3-4 hour battle that I had with Ubuntu. The Fedora packages even set up the correct SVN provider interfaces for Subclipse, which really impressed me.
I had some trouble setting up NetBeans on Fedora, but mainly because it could not find the JRE directory; once that was sorted out, it installed fine. I tend to use Eclipse for PHP development and NetBeans for C++ development, as I have never really got on with the Eclipse CDT.
Ubuntu is definitely the more polished distro for general use; my specialised needs tend to lean me towards Fedora, where I'm willing to put up with the shortcomings. I also like the faster, more responsive feel of the Fedora distro.
One final tip: if you are playing with beta software and hit issues, engage with the community around the distro; they are normally very responsive. And make sure that any problems you find are submitted as bug tickets, or they will never get fixed. Don't just sit back and wait for somebody else to report the problems...
Tuesday, 14 October 2008
I wanted the machine to run FreeMind so I could take notes at FOWA, which was last week.
Then suddenly disaster struck: I powered it up the weekend before the show only to be greeted with a blank screen and no activity at all. The machine was dead as a dodo.
So I packed it all back into the box and took it back to John Lewis in Reading, where I had got it from originally. And to give those guys their due, they were fantastic: they did not quibble, and swapped the machine out for a new one immediately without any hassle. At least one organisation knows about customer service (note they also include an extended two-year warranty for free with all items). I will be buying all my electronics from those guys in the future.
Anyway, back to the story. I spent a frantic weekend reloading all my software and backups (yes, I had them) onto the new machine, and headed off to the show. The machine performed fantastically, even managing to handle the wifi connections in the hall where my colleague's EEE could not cut the mustard.
Then this evening, on the train back home, lightning struck twice. I powered down the AAO, realised I had not copied something I wanted off it onto my pen drive, and went to power it back up again, only to discover the machine had converted itself into a plastic brick again, totally unresponsive to any prodding, engineer's taps or other incantations.
Despondent at the thought of having to return it to JL with an explanation of "honest, guv, it just broke again", and of negotiating the disdainful looks and insistence that I "must have done something to it" (after all, it is the second time...), I resigned myself to being without my AAO whilst JL investigated what abuse I had heaped on the little beastie (again), all whilst feeling like an abuser of young, innocent netbooks.
However, it turns out that this is a known problem, and the AAO even has a built-in mechanism for fixing it, even if it is lying on its back with its metaphorical legs in the air. An off-chance search of the net, looking for other lost souls with terminal Aspire syndrome, hoping to find solace in the company of other unfortunates and a chance to plead my innocence to a more receptive group sharing this traumatic experience, turned up a post that offered a last hope of salvation.
Festooned with dire warnings about following every step to the letter, and the consequences of not doing so, lay a page that made me once again aspire to get my Aspire motoring again.
So the Aspire MAY drop its flashed BIOS occasionally, forcing it to emulate the common house brick, but it has a hardwired loader that will pull a copy of the BIOS off a USB pen drive and restore it to its former glory, even if the machine is exhibiting no other outward signs of life. The gory details can be found at Macles' blog. Suffice to say I followed the recipe to the letter, waved the incantations in the air, mumbled the words of power, and breathed life back into my portable building material.
Waiting for the process to finish and for the machine to restart was the longest two minutes of my life, but to see the machine spring miraculously back to life, like Lazarus rising from the dead, was a thrill worth raving about.
Wednesday, 17 September 2008
Monday, 11 August 2008
Saturday, 12 July 2008
Friday, 11 July 2008
Sunday, 22 June 2008
So here I am again at Alexandra Palace, at the BBC/Microsoft Mashup 08 event, the 48-hour homage to all things techy and geeky. There is a certain sense of déjà vu, having been here before at a similar event in 2007. However, this time the flavour is different, the presentation a little more polished.
The guys from ARM converted their table into a makeshift electronics workshop and slaved away all night to create a standalone system for displaying location-sensitive webpages.
Finally, I ran into an old friend, Toby, whom I had not seen for some time.
Monday, 2 June 2008
I gave my talk at BarCamp yesterday about scalability, startups and using the cloud to completely operate a new company, which seemed to go down well. Whilst running around networking and having lots of fun meeting up with old and new friends, I also managed to put our new development and staging environment for Bejant live (thank god the wifi got fixed :-) ). And later this week we will be shifting all the final pieces of the organisation into the cloud. So we are practising what we preach.
Saturday, 31 May 2008
Sunday, 25 May 2008
Saturday, 24 May 2008
Friday, 23 May 2008
Wednesday, 21 May 2008
Saturday, 17 May 2008
- PHP 5.2 based
- Apache 2.2
- CentOS 5.0
- Swish-e Indexer (for search).
- Video distribution and conversion (ffmpeg)
- Amazon EC2 - Elastic Computing Cloud
- Amazon S3 - Flexible Storage
- Amazon SQS - Message Queueing.
- 2 Front-end servers
- 2 Database servers
- 1 Test/QA server
- 1 Developer server
- 1 Video Processing Server
- 1 Utility server (ad server, mail-list manager, Feed processing pipeline).
I looked at a few other management tools, such as Scalr and EC2PHP, neither of which provided enough capabilities to reasonably manage the cluster. It is indeed possible to roll your own, but we felt that RightScale gave us an edge and made creating this complex system setup easier and more maintainable.
RightScale provides the following:
- Replicated database solution
- Load balancing front ends
- Monitoring and alerting
- Multi-server clusters
- Log file consolidation
- Automated system administration
- Dynamic server configuration
We decided that we wanted to create an environment that supports the full lifecycle of the Bejant.com development activity, which is predominantly Scrum based; to that end we wanted a production pipeline that moves releases from Development to Test to Live in an organised fashion. Bejant's sprint cycles operate on an approximately two-week timeline, during which a number of major and minor feature enhancements are introduced, alongside the usual maintenance and bug-fixing activities that are normal for any development team. The reason for the separate Test environment is to isolate the QA folk from the day-to-day change that occurs on a development system, and to allow them to operate their own database with known test accounts and data.
The challenge here is to make sure that the codebase and database schema are aligned at each stage of the pipeline. With a site such as Bejant that is undergoing rapid development, these elements are often quite different in each stage as new features are added and rolled through to production.
To that end we decided that the system would effectively boot each stage from a Subversion repository, which would hold branches that reflect the stages in the pipeline.
- The dev instances always boot from the trunk, and reflect the current state of the codebase.
- The Test instances boot from trunk, but are set to a particular revision that is deemed to be "in test"; the test engineers can choose which revision to boot an instance from.
- The live system boots from a branch which represents a released product.
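To make the three stage behaviours concrete, the boot logic amounts to choosing a checkout URL and revision per stage. This is a simplified sketch of the idea, not our actual boot scripts; the repository URL, branch name and target path are made up for illustration:

```shell
# Compose the svn checkout an instance would run at boot, based on its stage.
# REPO, the branch name and /var/www/app are illustrative placeholders.
REPO="http://svn.example.com/bejant"

checkout_cmd() {
  stage=$1     # dev | test | live
  rev=$2       # revision pinned by the test engineers (test stage only)
  case "$stage" in
    dev)  echo "svn checkout $REPO/trunk /var/www/app" ;;
    test) echo "svn checkout -r $rev $REPO/trunk /var/www/app" ;;
    live) echo "svn checkout $REPO/branches/release /var/www/app" ;;
    *)    echo "unknown stage: $stage" >&2; return 1 ;;
  esac
}

checkout_cmd dev
checkout_cmd test 1234
checkout_cmd live
```

An instance would run the composed command at boot; the only per-stage state it needs is its stage name and, for Test, the pinned revision number.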
In my next post we will look at some of the basics of AWS and the facilities it provides.
Saturday, 5 April 2008
However there are a few flies in the ointment, that always draw me back.
Single Sign On....
All of these packages usually ship with their own embedded user and profile management system, and trying to "integrate" multiple systems together can be a nightmare. You either end up with complex and fragile "bridges" that usually involve maintaining account-mapping tables and copying user data backwards and forwards between disparate systems, or you end up having to hack a core feature out of the system and replace it with your own interface to the user authentication and authorisation system that is shared across the whole site.
What we need is a mechanism for abstracting users and profiles that can be plugged in across multiple systems. OK, some will say that's what directory solutions like LDAP are for, but they only solve part of the problem (putting all the data in one place); they don't help with maintaining a consistent session across disparate products.
A component SSO solution should handle all activities related to user data: sign-in, sign-up, sign-out, password reminders, alerting (email, IM, SMS), user-to-user communications, profile/bio maintenance and visibility controls. And it should preferably cope with international requirements such as regionally variable data-protection rules, multisite replication, etc.
One possible solution I have been examining is a new social networking core called Ringside (http://www.ringsidenetworks.com), which handles all of the above and adds many other components such as groups, contact lists, friends networks, etc. It even supports running Facebook apps inside the core, and has a roadmap that will encompass OpenID and OpenSocial. It would make an ideal core platform for attaching other applications to as modular components. It's nicely architected, having separated front-end and back-end processes.
If a suitable API or standard can be evolved that gives open source package writers the option to avoid all that user-management code and just plug in to a common set of APIs, some very cool setups can be built.
Almost every system I have examined uses a completely different mechanism for handling presentation: some use templates, some use CSS, some are table-ridden nightmares, but they are all wildly different, and most are single-tier applications where the back-end logic and the front-end page generation are tightly integrated. In this day and age there is no excuse for this. If you are an author planning an open source product (or any web-based product), consider true separation between your front-end and back-end systems, and use a RESTful interface to bind the two together; we systems integrators will love you for it, as we can choose to take your back-end engine and integrate it directly into any front-end we are using to wrap the web service.
If the Open Source movement could embrace the two principles above, you would see a lot more adoption in business and enterprises.
Thursday, 20 March 2008
Updating is simple, so long as you remember one little "gotcha".
$ sudo gem update --system
$ sudo gem update
Now here is the gotcha: Rails 2.0.x has a new component, Active Resource, which is not upgraded by the standard upgrade path.
If you get the following error, then it's likely that you haven't updated Rails properly.
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems.rb:379:in `report_activate_error': Could not find RubyGem activeresource (= 126.96.36.19953) (Gem::LoadError)
So the solution is to run either of the following:
$ sudo gem install rails
$ sudo gem install activeresource
You don't need to use --include-dependencies any more; it's the default in the current version of RubyGems.
Note also that the standard Rails distribution on Leopard does not include a MySQL driver; I will produce a post on how to upgrade this later.
Tuesday, 18 March 2008
$ sudo mv ~/downloads/grails-1.0.1 /usr/share
$ sudo chown -R root:wheel grails-1.0.1/
$ sudo chmod 0755 grails-1.0.1/bin/*
Now create a symlink for accessing the current version of Grails; if you download and install another version later, you can just move the symlink to point at it. This is good practice, as it prevents you having to overwrite your old installation.
$ cd /usr/share
$ sudo ln -s grails-1.0.1 grails
Finally, add the following to your /etc/profile or ~/.profile, depending on whether you want it available for all logins or just your own.
GRAILS_HOME=/usr/share/grails; export GRAILS_HOME
PATH=$GRAILS_HOME/bin:$PATH; export PATH
Welcome to Grails 1.0.1 - http://grails.org/
Licensed under Apache Standard License 2.0
Grails home is set to: /usr/share/grails
No script name specified. Use 'grails help' for more info
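The symlink trick generalises to any versioned install under /usr/share. A safe way to rehearse it, in a scratch directory with made-up version names, looks like this; `ln -sfn` repoints the link in one step without disturbing the old tree:

```shell
# Demonstrate the symlink-swap upgrade pattern in a scratch directory.
work=$(mktemp -d)
cd "$work" || exit 1

mkdir grails-1.0.1 grails-1.0.2     # two pretend installed versions
ln -s grails-1.0.1 grails           # "current" points at the old version

# Upgrade = repoint the link; -n stops ln descending into the old target.
ln -sfn grails-1.0.2 grails

readlink grails                     # now reports grails-1.0.2
```

Because GRAILS_HOME points at the symlink rather than a versioned directory, nothing in your profile needs to change when you upgrade.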
$ sudo mv ~/downloads/groovy-1.5.4 /usr/share
$ cd /usr/share
Now create a symlink for accessing the current version of Groovy; if you download and install another version later, you can just move the symlink to point at it. This is good practice, as it prevents you having to overwrite your old installation.
$ sudo chown -R root:wheel groovy-1.5.4/
$ sudo chmod 0755 groovy-1.5.4/bin/*
$ sudo ln -s groovy-1.5.4 groovy
Finally, add the following to your /etc/profile or ~/.profile, depending on whether you want it available for all logins or just your own.
GROOVY_HOME=/usr/share/groovy; export GROOVY_HOME
PATH=$GROOVY_HOME/bin:$PATH; export PATH
Monday, 17 March 2008
However, it is a LOT simpler than that. Just install the Xcode 3.0 tools from the optional installs section of your Leopard distribution disk, and it will install and set up:
/usr/share/ant -> ant-1.7.0
/usr/share/maven -> maven-2.0.6
/usr/share/junit -> junit-4.1
and it will throw CVS, Subversion and a multitude of other tools in for good measure. Unfortunately, Xcode 3 won't run on Tiger (10.4.x), where you are stuck with Xcode 2.5, which may not set up all this goodness; but since I don't have a Tiger machine any more, I can't verify this.
So, to start the investigation of the Java frameworks/platforms listed above, I just added the following to the end of my /etc/profile, to make sure the packages can find the installed goodies.
JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Home; export JAVA_HOME
ANT_HOME=/usr/share/ant; export ANT_HOME
MAVEN_HOME=/usr/share/maven; export MAVEN_HOME
JUNIT_HOME=/usr/share/junit; export JUNIT_HOME
Saturday, 8 March 2008
This is a threefold learning exercise: first learning Ruby the language, then Rails, then how to run and deploy Rails on Mac OS X, my operating system of choice.
One of the things I have learnt so far is that, unlike compiled languages, interpreted languages can tie you up in knots trying to work out why something that seems to run does not do what you want it to do.
I guess the next thing I will have to learn is how to get the Ruby debugger running, or I'm going to be even more prematurely bald.
So I'm running:
Mac OS X Leopard on a MacBook, with the Apple-supplied Ruby, MySQL AB's MySQL 5.0.51, Aptana Studio as my IDE, and Mongrel as the web server.
Getting that lot to co-exist was an adventure in its own right (and may be the subject of another post), but now I have it working it's a pretty slick environment; I reckon in the last 10 hours I have done as much work as would have taken me a week in PHP or C++.
So, off to find out how to make debugging work in the IDE; I managed it once, but only after considerable fiddling.
BTW: the new MacBooks are wicked developer workstations; the black MacBook came with a 250G drive and 2G of RAM, and is as fast as hell, faster than the 3rd-gen MacBook Pro that I recently gave up.
Now wheres that debugger documentation :-)
Wednesday, 27 February 2008
However, one word of caution: Marc's installation process assumes that you have not yet enabled the on-board PHP implementation. If you have, then you need to comment out the LoadModule line you carefully uncommented to enable it, and trust to Marc's magic installation of additional included conf files.
Oh, and another notable change is that php.ini moves from /etc to /usr/local/php5/etc.
More details of beta 6 can be found on this thread. http://www.entropy.ch/phpbb2/viewtopic.php?t=2945&start=0&postdays=0&postorder=asc&highlight=
Tuesday, 12 February 2008
Wednesday, 30 January 2008
UPDATE: So here we are, 8 years after this original post, and the capability I was looking for has finally made an appearance, though not quite in the way I expected. Check out the Shadow DOM initiative, which provides the kind of presentation sandboxing I was seeking. See below for more information.
Thursday, 24 January 2008
Friday, 11 January 2008
I first noticed this when I was trying to install Leopard (10.5) on these machines. Leopard weighs in at 7.8G with all options installed, comes on a dual-layer DVD, and was completely unreadable by both Intel machines.
The solution turned out to be simple. Using a small external USB drive formatted with a GUID partition scheme (this is important, people, pay attention: there are two types of partition scheme, GUID and Apple Partition Map; use the former or it won't boot), I used Disk Utility on one of the G4 machines to copy the DVD to the USB drive by "restoring" it to the volume.
Then it was just a matter of plugging the drive into the Intel Macs and running the OS X installer to get Leopard installed on the fragile machines.
So whilst I recognise that I'm probably just the victim of bad luck, I really do wonder if Macs are as reliable now we have moved to the Intel world.