Tuesday, 21 October 2008
I have been trying out several Linux distributions on my Aspire One to find one that suits the machine best. I have tried everything, including a copy of PC-BSD that was, well, "less than successful".
The two top runners are currently Fedora 10 (beta) and Ubuntu 8.10 (beta).
First, a warning: beta software is not for everybody. You can end up with a dead machine if you hit an issue with an update, and you have to know how to recover your machine when that happens. You also need a good set of backups.
There is a superb tool called remastersys that creates bootable backups of not just your data but your entire operating system, which I use in conjunction with an external FAT-formatted USB drive.
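For reference, remastersys drives the whole thing from one command. The subcommand names below are from memory, so check the man page, but the invocation is roughly:
$sudo remastersys backup custom-backup.iso
$# "backup" includes your home directory in the ISO; "dist" builds a
$# distributable image with personal data excluded
The resulting ISO lands under /home/remastersys by default (again, from memory), from where I copy it onto the external drive.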
One thing I have found is that finding a distro that supports the very recent hardware and chipsets inside the Aspire One is hard, and I have narrowed my evaluations to those distros shipping the brand new 2.6.27 kernel or later, as that is the only one that seems to support the chipsets out of the box. I also need NetworkManager 0.7, to support my mobile USB broadband modem (a Huawei 169G).
I was using Ubuntu at first but needed to change to an RPM/YUM based system because we use CentOS 5 everywhere at work, and since I am building code that needs to run and install in that environment, the differences in the package managers were just too great for me to be comfortable with. (Yes, I write software on my Aspire netbook; it's quite capable of it. I was very surprised at how well Eclipse performs on this platform, almost as fast as my 2008 MacBook.)
Basics:
Installation of both distros was relatively easy. I used UNetbootin to create a bootable 1G USB thumbdrive directly from the distributed ISOs and booted from that, and I did not notice any issues with the installation of either distro. Out of the box Fedora was slightly better in this respect, with a few caveats. Choose the GNOME variant of each, as I have found that the newer KDE setups are somewhat less functional; in particular, current KDE incarnations (4.1+) seem to have issues with saving settings.
Video:
Ubuntu produced the more complete setup here, but only after I deleted the /etc/X11/xorg.conf file and allowed the new xorg 7.4 system to work its magic with configless boots.
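For anyone wanting to repeat the trick, it amounts to no more than this; I would suggest backing the file up rather than deleting it, so you can restore it if the autoconfiguration misbehaves:
$sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
$# now restart X (log out and back in, or just reboot) and let it autodetect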
glxgears turned in a performance of about 350 fps, which is fast enough to enable Compiz desktop effects; however, since that is just eye-candy, it's debatable whether it is worth enabling on a device of this class. A minor irritation was that in order to support multiple screens, the screen resolution app has to create an xorg.conf with a virtual screen that encompasses both physical screens, and it initially gets it wrong, which means the external monitor resolution stays low until you hand edit the file to up the size. I just doubled the dimensions in both directions and rebooted, and was then able to select the 1280x1024 resolution I was looking for on my monitor.
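For anyone facing the same edit, the relevant fragment of the generated xorg.conf looks something like the following; the Identifier will vary with your setup, and the sizes shown are simply my doubled values:
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        Virtual 2048 1200
    EndSubSection
EndSection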
Fedora was a mixed bag. It again needed the delete-my-xorg.conf trick, but try as I might I could not get it to properly support an external monitor at a reasonable resolution. However, the performance of the driver in glxgears is significantly better, getting 550-600 fps. I'm still trying to determine the reasons for this performance difference.
Wireless:
Due to the inclusion of NetworkManager 0.7 in both distros, wireless was a doddle; both wifi and the USB modem worked out of the box. However, both distros suffered from the same issues with DHCP and wifi performance.
Every now and then they refused to acquire either a wired or wireless IP address due to DHCP timeouts; rebooting the machine seemed to clear the problem. The wifi is using the new ath5k driver for the Atheros chipset in both cases, and I have found that this driver seems to affect the sensitivity of the wifi, with far lower signal strengths than under the older madwifi driver, and frequent dropouts and stalls.
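Before resorting to a reboot, it is probably worth kicking the DHCP client by hand first; something along these lines should force a fresh lease (substitute your own interface name for wlan0):
$sudo dhclient -r wlan0
$sudo dhclient wlan0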
Also, under the madwifi driver there were a set of sysctls that would enable the wifi LED; these don't work on the ath5k driver, and I have not found any substitutes. The driver binds to the led_class module, and looking at the source it has functions for enabling/disabling this mode, but I can't find any documentation on how to enable it.
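For the record, the madwifi-era knobs were sysctls along these lines; the names are from memory and may not be exact, and they have no effect under ath5k:
$sudo sysctl -w dev.wifi0.softled=1
$sudo sysctl -w dev.wifi0.ledpin=3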
Laptop use:
Ubuntu has working suspend and resume; the sound sometimes does not restore properly coming out of sleep or hibernate, but that is a minor annoyance. On Fedora both modes were a bust, resulting in a locked-up machine requiring a hard reset.
Software:
I must admit my needs are probably different from average: I need a full local LAMP stack and software development tools (yes, it runs fine, and no, the machine is not slow after doing so). The Aspire has a 1.6GHz Atom that presents two logical cores, up to 1.5G of RAM and a 120G HDD, so it is quite capable of handling this load. It should be noted that the spec of the machine is almost identical to that of an instance running on the Amazon EC2 cluster (1.6GHz, 1.6G RAM, 160G HDD).
With Ubuntu, setting up the stack was hard; on Fedora it was a breeze. Fedora even has a full Eclipse 3.4.x install in the repository, and has installable packages for Eclipse PDT, Subclipse and Xdebug, so my usual fight to get a working PHP dev environment was eliminated.
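To give a flavour of how little work this is, the whole stack comes down from the Fedora repositories with a couple of commands; the package names below are approximately as I remember them, so double-check with yum search:
$sudo yum install httpd mysql-server php php-mysql memcached
$sudo yum install eclipse-pdt eclipse-subclipse php-pecl-xdebug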
I was able to set up the entire machine for running our web app, including checking out the code from our Subversion repository, in under 30 minutes, versus the 3-4 hour battle that I had with Ubuntu. The Fedora packages even set up the correct SVN provider interfaces for Subclipse, which really impressed me.
I had some trouble setting up NetBeans on Fedora, but mainly because it could not find the JRE directory; once that was sorted out, it installed fine. I tend to use Eclipse for PHP development and NetBeans for C++ development, as I have never really got on with the Eclipse CDT.
Overall:
Ubuntu is definitely the more polished distro for general use; my specialised needs lean me towards Fedora, where I'm willing to put up with the shortcomings. I also like the faster, more responsive feel of the Fedora distro.
One final tip: if you are playing with beta software and hit issues, engage with the community around the distro; they are normally very responsive. And make sure that any problems you find are submitted as bug tickets, or they will never get fixed. Don't just sit back and wait for somebody else to report the problems...
Tuesday, 14 October 2008
Unbricking an Aspire One
About 5 weeks ago I bought an Acer Aspire One. Fantastic little machine; I loaded it with Ubuntu (Intrepid Ibex alpha), and 95% of the hardware works out of the box, even my 3UK 3G modem. Boy, was I a happy camper.
I wanted the machine to run FreeMind so I could take notes at FOWA, which was last week.
Then suddenly disaster struck. I powered it up the weekend before the show only to be greeted with a blank screen and no activity at all. The machine was as dead as a dodo.
So I packed it all back into the box and took it back to John Lewis in Reading, where I had got it from originally. And to give those guys their due, they were fantastic: they did not quibble, and swapped the machine out for a new one immediately without any hassle. At least one organisation knows about customer service (note they also include an extended 2-year warranty for free with all items). I will be buying all my electronics from those guys in the future.
Anyway, back to the story. I spent a frantic weekend reloading all my software and backups (yes, I had them) onto the new machine, and headed off to the show. The machine performed fantastically, even managing to handle the wifi connections in the hall, where my colleague's EEE could not cut the mustard.
Then this evening, on the train back home, lightning struck twice. I powered down the AAO, realised I had not copied something I wanted off it onto my pen drive, and went to power it back up again, only to discover the machine had converted itself into a plastic brick again, totally unresponsive to any prodding, engineer's taps or other incantations.
Despondent at the thought of having to return it to JL with an explanation of "honest guv, it just broke again", and of negotiating the disdainful looks and the insistence that I "must have done something to it" (after all, it is the second time...), I resigned myself to being without my AAO whilst JL investigated what abuse I had heaped on the little beastie (again). All whilst feeling like an abuser of young, innocent netbooks.
However, it turns out that this is a known problem, and the AAO even has a built-in mechanism for fixing it, even if it is lying on its back with its metaphorical legs in the air. An off-chance search of the net, looking for other lost souls with terminal Aspire syndrome, hoping to find solace in the company of other unfortunates and a chance to plead my innocence to a more receptive group sharing this traumatic experience, turned up a post that offered a last hope of salvation.
Festooned with dire warnings about following every step to the letter, and the dire consequences of not doing so, lay a page that made me once again aspire to get my Aspire motoring again.
So the Aspire may occasionally drop its flashed BIOS, forcing it to emulate the common house brick, but it has a hardwired loader that will pull a copy of the BIOS off a USB pen drive and restore it to its former glory, even if the machine is exhibiting no other outward signs of life. The gory details can be found at Macles' blog. Suffice to say I followed the recipe to the letter, waved the incantations in the air, mumbled the words of power, and breathed life back into my portable building material.
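In outline, and very much from memory (follow the instructions on Macles' blog to the letter rather than this sketch), the recovery preparation looks like this:
$# format a pen drive as FAT; the device name here is an example only
$sudo mkfs.vfat /dev/sdb1
$sudo mount /dev/sdb1 /mnt/usb
$# copy the BIOS image and flasher from Acer's support site onto the stick
$cp ZG5IA32.FD FLASHIT.EXE /mnt/usb
Then, with the stick inserted, you hold down Fn+Esc while pressing power, and the loader pulls the image off the stick and reflashes the BIOS.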
Waiting for the process to finish, and for the machine to restart, was the longest two minutes of my life, but to see the machine spring miraculously back to life, like Lazarus rising from the dead, was a thrill worth raving about.
Phew......
Wednesday, 17 September 2008
Plaxo Pulse's blog integration is a bit dubious
I just noticed that Plaxo now aggregates content from your blogs and displays it inside the Plaxo system. At fav.or.it we operate in this space, so we have some experience of what is considered good etiquette here.
1) We automatically scan blog feeds for Creative Commons licensing information, and if the license is missing or specifically denies commercial use, we publish an extract of the article and a link to the original blog posting. Plaxo does NOT do this at all.
2) We never show advertising against a blog post; the rights to commercial exploitation of content rest with the site that originates it. Plaxo is taking liberties with people's copyright by showing Google ads that earn them revenue against each blog post, and many folks rely on pro-blogging to provide an income.
3) Plaxo invites comments on each post, which it shows against the listing, but unlike fav.or.it, it doesn't post the comments back to the originating blog. Plaxo is basically fuelling its engagement with other people's content without contributing anything back to the originator.
Poor show, Plaxo. You should sort this out!
Hacking the Aspire One
I recently acquired an Acer Aspire One. This little device is fantastic, but it is severely limited by the default software it is delivered with. After spending an evening opening it up and dismantling it in order to stick in a spare 1G RAM module that I had knocking around, taking it to 1.5G, I then spent a further evening loading it with the latest release of Ubuntu, wiping out the Linpus Linux it arrives with by default.
Software updates included hacking the Vodafone Linux 3G card driver to work with my "3" 3G USB modem, spending hours tinkering with the madwifi driver until it was able to connect to the networks at home and at work, and playing with Compiz and the Intel 945 video drivers until I had the 3D effects working at full speed.
The Aspire One is an incredible machine: fast, small and compact, with very, very good performance. Small enough to slip into the poacher's pocket in my Barbour jacket.
But the best hack I did to this machine, the one that transformed the device from an interesting toy into a usable portable machine, was the simplest, cheapest and fastest to implement.
Sticking two rubber "feet" strips, salvaged from the bottom of an old hard drive enclosure, onto the two mouse buttons on the trackpad has transformed the machine. I can now accurately control the mouse, and click and double-click, without having to feel around for the right spot to press down on.
The Aspire One weighs in at about £220, and delivers near-desktop performance in a small portable package, with a good 3+ hour battery life.
Monday, 11 August 2008
Spooks Code 9
Last night I watched the first two episodes of the new Spooks: Code 9 spin-off series, and to be honest, if this is the standard of British drama that we are to receive in the future then I am very saddened indeed. This series has felt more like the "K9 and Friends" spin-off from Dr Who, and should probably have been broadcast on CBBC instead of prime time on BBC3. I realise that somebody probably has targets to meet on "youth programming", but this pile of £$%^ is not going to reach that demographic.
This series has more in common with Hollyoaks and Big Brother than the wonderful series we thrilled to for six seasons that bears its name. As a long-time Spooks fan I was very, very disappointed, and felt that the BBC should do all it can to disassociate this travesty from the quality brand that it created with the original series.
Please, BBC, rename this farce and do not cheapen the brand you have spent so much on fostering in the main Spooks series.
Saturday, 12 July 2008
The search for an agile, agile process
Agile is a great methodology, and applied right it can make your development process less like a black box and more open to the stakeholders. It can do wonders for improving the reputation of your dev team with the rest of your organisation.
However:
There are a few things that make me nervous about the process. I have worked with it at Yahoo, and seen some of the ways its original goals can become subverted.
1. It can become the ultimate micromanagement tool. Some aspects of agile are a PM's wet dream: being able to capture and track exactly the number of hours each team member spends on each task in a sprint gives the PM a unique view on progress. But it can also become very time-driven, with team members feeling pressure to wrap up a task quickly to stay inside the timebox. It can make your team feel like a row of battery hens if not managed sensibly.
I prefer not to assign tasks on a number-of-hours basis, but rather on a complexity and resource basis; in reality the time element is only required to aid the planning stage of a sprint, to ensure the team don't bite off more than they can chew. Instead we will assign tasks to Tiny, Small, Medium, Large and Huge timeboxes.
2. What is it with the obsession with statistics and graphs that most agile implementations seem to spawn? If the process is itself consuming considerable time producing statistics on a continuous basis, then it is not fulfilling its aim. If we are spending more time on documenting the process and its progress than actually doing the work, then something is amiss, so for small teams with good internal communications you should reduce the stats produced to the bare minimum.
A good ticketing/issue tracker is absolutely essential, and if configured right it should produce all the reports that are needed without anyone generating them by hand.
Especially if the PM/Scrum master is themselves a productive team member producing work towards the sprint goals.
So I'm currently designing a bare-bones, pared-down, minimalist Scrum agile process for fav.or.it, one that hopefully won't hit the pain points listed above.
I'll let you know how we get on.........
Friday, 11 July 2008
Working on fav.or.it
Wow. All I can say is that I haven't had so much fun for years; despite the shock of having to do a two-hour commute each way, working at fav.or.it is turning into one of the most interesting episodes of my life.
I'm actually writing real code again, and you would not believe how good that feels. The crew at fav.or.it are great, and take in their stride challenges that would have other teams I have worked with scratching their heads; their enthusiasm for what they are doing is infectious. And they are all definitely "can do" folks. Nick has a great vision of what and where he wants to go, and we are working flat out to get there.
And the product is great. In the two weeks I have been there, I have been thrust into the middle of a transformation of the site; what is going to come out the other end in a few weeks will be a top-notch enhancement to an already top product.
I am lucky, having come into the company just after everybody worked so hard to get the initial beta product launched and running: I can now help to shape the next set of iterations of the design. Already we are adding some very cool features that will give our competitors a run for their money.
Fav.or.it is kind of unique. On the surface it looks like any other blog aggregation product, but scratch the surface and a whole load of good stuff bubbles up; in fact the biggest challenge we will have is educating people that, just because the product looks like other sites, it is not. Fav.or.it's big secret is that it's a two-way portal: it does not just gather up content for you to read, it allows you to interact with that content, and distributes your comments back to the originating sites. No more jumping around using different blog commenting systems to spread your opinions and observations; you can do it all from one place. And there is more: fav.or.it will track conversations you are having on a host of sites and present them all in one easy-to-use reader/commenter.
I used to be an avid Digg reader, but since coming to work on fav.or.it I have hardly been anywhere near Digg.
Anyway, stay tuned and I will describe some of the cool stuff we are doing with semantic analysis over the next few weeks.
Sunday, 22 June 2008
Mashup 08
So here I am again at Alexandra Palace, at the BBC/Microsoft Mashup 08 event, the 48-hour homage to all things techy and geeky. There is a certain sense of déjà vu, having been here before at a similar event in 2007. However, this time the flavour is different, the presentation a little more polished.
This year there are some fantastic hacks. Ewan's "virtual round the world flight", using a marvellous lash-up of Google Earth, some gaming controls, Twitter, and not one but two projectors, coupled with a brave attempt at airframe construction on a grand scale, gave us an intriguing project to marvel at.
The guys from ARM converted their table into a makeshift electronics workshop, and slaved away all night to create a standalone system for displaying location-sensitive webpages.
Finally, I ran into an old friend, Toby, whom I had not seen for some time.
Stay tuned and I will cover some more highlights from this event as it unfolds tomorrow.
Monday, 2 June 2008
Talk on scalability, the cloud and virtual startups
I gave my talk at BarCamp yesterday about scalability, startups and using the cloud to completely operate a new company, which seemed to go down well. Whilst running around networking, and having lots of fun meeting up with old and new friends, I also managed to put our new development and staging environment for Bejant live (thank god the wifi got fixed :-) ). And later this week we will be shifting all the final pieces of the organisation into the cloud. So we are practising what we preach.
BarCamp's event page is on Backnetwork; keep checking back there and I will make the slides available. I also understand that the presentations were being recorded; I'm not sure if mine was, but if so I will track down the podcast and get that up too.
Saturday, 31 May 2008
Barcamp London 4 - Saturday afternoon
So I made it to the afternoon sessions. The first session that I attended was about Merb, an alternative Ruby-based framework, lighter and faster than the Rails stack, which is claimed to achieve much higher performance than Rails applications. Given that deployment and performance are Rails' weakest points, the addition of Merb to the toolchest may stimulate development in this area.
Finally, Simon Willison gave a fascinating talk about the Google App Engine, which is a different take on the elastic computing meme that is sweeping the net at the moment, providing some of the capabilities of systems such as Amazon EC2.
I caught up with some friends from Yahoo, and from GCap, which is rapidly acquiring many of the best folks from the old Yahoo business: Christian, Murray and Mike, who are definitely showing signs of wear :-)
Stay tuned for more ......
Barcamp London 4
One of the rules of BarCamp is that anybody who attends must give a talk. GCap is raising the stakes this year by recording all the presentations and making them available on its podcasting network, which is a first for BarCamp.
The other purpose of BarCamp is the opportunity to network with people in the European web developer community, and BarCamp attracts folks from all over Europe. This year's event is particularly well attended by people from far shores, due to the concurrent running of "London Web Week" and events such as "@media", making the week-long trip to London an attractive proposition for those that want to cram a whole year's worth of developer conferences into one hit.
I met a fascinating individual (George Palmer) of idlasso.com who, like myself, is focused mainly on configuration and deployment issues, and we spent an enjoyable half hour exploring what the ultimate deployment architecture for small start-ups could look like, given the available and upcoming options.
For myself, I am planning to do a talk on our own experiences of using some of these options to develop and deploy bejant.com, a nascent social network site designed to introduce graduate students to potential employers.
Anyway, enough for now; I have to go and soak up more good material while I have access to such a concentration of braincells in one place.
Sunday, 25 May 2008
FreeBSD on Amazon EC2?
I spotted a post on peat.org about an indication that support for FreeBSD on Amazon EC2 might be coming.
This is apparently related to the new support for alternative OSes such as OpenSolaris. If anybody from Amazon is listening:
Please please please please please ........ support FreeBSD on EC2. Its stability and ease of use in server environments is second to none. I would dump Linux in a heartbeat if FreeBSD support was available.
Saturday, 24 May 2008
Old Dog, New Tricks
Web frameworks seem to be the fad of the month at the moment; I have looked at a number of them, from the venerable Ruby on Rails through Cocoon, Symfony, CakePHP and CodeIgniter to Groovy and Grails.
However, I stumbled across a new one this week that really took me aback: Cobol on Cogs. Yes, that really is a web framework written in COBOL. All we need now is Assembler on Acid and my week would be complete :-)
Doh........
Friday, 23 May 2008
Stupid Stupid Sun Linux VM installer %$&*%-up
Sun's Linux JDK RPM installer includes a number of unversioned provides for common installable packages. So if you attempt to install something like "xml-commons-apis" with rpm or yum, the package manager tries to obsolete the unversioned package, and the only provider it can find is the JDK itself.
So after installing this package, you suddenly discover your JDK has vaporised.
So far there appears to be no known workaround, other than to build a new Sun JDK package from source that does not provide the unversioned packages.
This fault has been around since February 2007 and has not been fixed; I hit it when trying to install a Red5 flash server, and could not get Ant to run because it can't find the resource.
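You can see the bogus resolution for yourself with rpm's provider query; on an affected machine, something like the following names the JDK rather than the real package (the version string here is illustrative):
$rpm -q --whatprovides xml-commons-apis
jdk-1.6.0_06-fcs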
Why is Java system configuration so damn hard?
Wednesday, 21 May 2008
Have the City of London Police been infiltrated by Scientology?
Yesterday the Guardian ran a story about an anti-Scientology demonstrator being prosecuted by the City of London Police (note: not the Metropolitan Police). The demonstrator's "crime" was to use the word "cult" to describe Scientology on a placard during a demonstration.
Aside from the obvious free speech issues, the article raises some interesting connections between the City of London Police force and the Church of Scientology, including senior officers accepting gifts from the church, and appearing in promotional videos.
Scientology is a dangerous cult that exploits the more vulnerable members of society whilst hiding behind laws designed to protect religious freedoms. Indeed, in many countries, the UK and Germany notably, the state has denied this cult religious status in order to prevent it exploiting the law, in the same way it attempts to exploit every other aspect of the society it disdains so much. More European countries need to follow suit, to ensure that Scientology cannot establish the legal foothold it has in other regions.
Serious questions need to be asked about the relationship between the City of London Police force and the Scientologists, and efforts should be made to determine just how far this insidious group has infiltrated an organisation that is supposed to serve society's best interests.
Saturday, 17 May 2008
Amazon AWS - A practical experience - Part 1
Over the last few weeks I have been engaged in migrating a site I have been working on to the Amazon Web Services environment. I have now got to the point where I feel I can start to write a series of posts about our experiences; this post is an introduction to the series.
Overview.
The site I have been migrating is Bejant.com, a LAMP-based graduate employment social network that I have been working with for the last 3 months. The characteristics of Bejant are as follows:
- PHP 5.2 based
- MySQL
- Apache 2.2
- Memcached
- CentOS 5.0
- Swish-e Indexer (for search).
- Video Distribution and conversion. (ffmpeg).
The Amazon Web Services used in this implementation are:
- Amazon EC2 - Elastic Computing Cloud
- Amazon S3 - Flexible Storage
- Amazon SQS - Message Queueing.
We are also evaluating Amazon SimpleDB as a means of persisting work storage between processes, but work in this area is at a very early stage.
The Runtime environment
Before we dive into the details of how we did this port, let's take a moment to list the services that we are attempting to provide:
- 2 Front-end servers
- 2 Database servers
- 1 Test/QA server
- 1 Developer server
- 1 Video Processing Server
- 1 Utility server (ad server, mail-list manager, Feed processing pipeline).
We chose to use the RightScale management environment, which for a monthly fee provides monitoring, alerting, instance management and configuration.
I looked at a few other management tools, such as Scalr and EC2PHP, neither of which provided enough capability to reasonably manage the cluster. It is indeed possible to roll your own, but we felt that RightScale gave us an edge and made creating this complex system setup easier and more maintainable.
RightScale provides the following:
- Replicated database solution
- Autoscaling
- Load balancing front ends
- Monitoring and alerting
- Multi-server clusters
- Log file consolidation
- Automated system administration
- Dynamic server configuration
We decided that we wanted to create an environment that supports the full lifecycle of the Bejant.com development activity, which is predominantly Scrum-based; to that end we wanted a production pipeline that moves releases from Development to Test to Live in an organised fashion. Bejant's sprint cycles operate on an approximately two-week timeline, during which a number of major and minor feature enhancements are introduced, alongside the usual maintenance and bugfixing activities that are normal for any development team. The reason for the separate Test environment is to isolate the QA folk from the day-to-day change that occurs on a development system, and to allow them to operate their own database with known test accounts and data.
The challenge here is to make sure that the codebase and database schema are aligned at each stage of the pipeline; with a site such as Bejant that is undergoing rapid development, these elements are often quite different in each stage as new features are added and rolled through to production.
To that end we decided that the system would effectively boot each stage from a Subversion repository, which would hold branches that reflect the stages in the pipeline (see the sketch after this list):
- The dev instances always boot from the trunk, and reflect the current state of the codebase.
- The Test instances boot from trunk, but are set to a particular revision that is deemed to be "in test"; the test engineers can choose which revision to boot an instance from.
- The live system boots from a branch which represents a released product.
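As a sketch of what "booting from the repository" means in practice (the repository URL and revision number here are illustrative, not our real ones):
$# dev: always track the tip of trunk
$svn checkout http://svn.example.com/bejant/trunk /var/www/app
$# test: trunk pinned at the revision currently "in test"
$svn checkout -r 4321 http://svn.example.com/bejant/trunk /var/www/app
$# live: a release branch
$svn checkout http://svn.example.com/bejant/branches/release-1.0 /var/www/app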
In my next post we will look at some of the basics of AWS and the facilities it provides.
Saturday, 5 April 2008
Yet another What open source CMS's need.... (YAWOSCN)
As a professional systems architect and systems integrator, I often ponder the use of open source CMS systems in commercial and semi-commercial websites. It's tempting: good quality, flexible content management solutions, with ongoing support and development, at very little cost. Products such as Drupal, TextPattern, Mambo etc. are fantastic. Couple them with other offerings for managing image galleries, message boards, learning management systems and a host of other application spaces, and it's easy to see how you can snap together really cool systems very cheaply.
However, there are a few flies in the ointment that always hold me back.
Single Sign On....
All of these packages usually ship with their own embedded user and profile management system, and trying to "integrate" multiple systems together can be a nightmare. You either end up with complex and fragile "bridges" that usually involve maintaining account mapping tables and copying user data backwards and forwards between disparate systems, or you end up having to hack a core feature out of each system and replace it with your own interface to the user authentication and authorization system that is shared across the whole site.
What we need is a mechanism for abstracting users and profiles that can be plugged in across multiple systems. OK, some will say that's what directory solutions like LDAP are for, but they only solve part of the problem, putting all the data in one place; they don't help with maintaining consistent sessions across disparate products.
A component SSO solution should handle all activities related to user data, signin, signup, signout, password reminder, alerting (email, im, sms), user to user communications, profile/bio maintenance and visibility controls. And preferably cope with international requirements such as regionally variable data protection rules, multisite replication etc.
One possible solution I have been examining is a new social networking core called Ringside (http://www.ringsidenetworks.com), which handles all of the above and adds many other components such as groups, contact lists, friends networks etc. too. It even supports running Facebook apps inside the core, and has a road map that will encompass OpenID and OpenSocial. It would make an ideal core platform for attaching other applications to as modular components. It's nicely architected, having separated front-end and back-end processes.
If a suitable API or standard can be evolved that gives open source package writers the option to avoid all that user management code and just plug in to a common set of APIs, some very cool setups can be built.
Front-ends and Templating
Almost every system I have examined uses a completely different mechanism for handling presentation: some use templates, some use CSS, some are table-ridden nightmares, but they are all wildly different, and most are single-tier applications where the back-end logic and the front-end page generation are tightly integrated. In this day and age there is no excuse for this. If you are an author planning an open source product (or any web-based product), consider true separation between your front-end and back-end systems, and use a RESTful interface to bind the two together; we systems integrators will love you for it, as we can choose to take your back-end engine and integrate it directly into any front-end we are using to wrap the web service.
If the open source movement could embrace the two principles above, you would see a lot more adoption in businesses and enterprises.
Thursday, 20 March 2008
Gotcha when upgrading Rails on Mac OS X Leopard
One of the first things I did when I received my new MacBook was to upgrade the Rails installation to 2.0.2; the standard installation of Rails on Leopard is 1.2.6.
Updating is simple, so long as you remember one little "gotcha".
$sudo gem update --system
$sudo gem update
Now here is the gotcha: Rails 2.0.x has a new element, ActiveResource, which is not upgraded by the standard upgrade path.
If you get the following error then it's likely that you haven't updated Rails properly:
$ rails
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems.rb:379:in `report_activate_error': Could not find RubyGem activeresource (= 2.0.2.9053) (Gem::LoadError)
so the solution is to either...
$sudo gem install rails
or
$sudo gem install activeresource
You don't need to use --include-dependencies any more; it's the default on the current version of RubyGems.
Note also that the standard Rails distribution on Leopard does not include a MySQL driver; I will produce a post on how to upgrade this later.
Tuesday, 18 March 2008
Holy grails on Mac OS X Leopard
In an earlier post I showed how to install Groovy, a Ruby-like language that runs on top of the Java VM. It is the backbone of the Grails initiative, which aims to provide a Rails-like environment on the Java platform; Groovy is to Grails as Ruby is to Rails.
This time I'm going to show you how to install and set up Grails on Leopard. If this post looks similar to the Groovy post, that's because the method for installing the two is very similar, and I have a cut-and-paste button, and I'm feeling lazy.
Prerequisites
First make sure you have java and its dev tools setup on your machine see this post on how to do that.
Now download Grails from the Grails site.
Unpack the archive and move it to /usr/share, for example:
$ sudo mv ~/downloads/grails-1.0.1 /usr/share
Now set ownership and permissions on the directories
$cd /usr/share
$sudo chown -R root:wheel grails-1.0.1/
$sudo chmod 0755 grails-1.0.1/bin/*
Now create a symlink to access the current version of Grails; if you download and install another version later, you can just move the symlink to point at it. This is good practice, as it prevents you having to overwrite your old installation.
$sudo ln -s grails-1.0.1 grails
Finally, add the following to your /etc/profile or ~/.profile, depending on whether you want it available for all logons or just your own.
GRAILS_HOME=/usr/share/grails; export GRAILS_HOME
And add the following to your PATH in the same file; note your PATH should be defined after the above:
PATH=$GRAILS_HOME/bin:$PATH; export PATH
Now you're ready to try it out and see if it works. Open up a terminal and type:
$grails
Welcome to Grails 1.0.1 - http://grails.org/
Licensed under Apache Standard License 2.0
Grails home is set to: /usr/share/grails
No script name specified. Use 'grails help' for more info
$
Cool, it works.
Scaling it up.
Around about this time last year I gave a presentation at BarcampLondon2 on scalability in low-end web services. The presentation was targeted at small startups and sole web developers, and aimed to give them some insights into how to harden a site against burst traffic.
Anyway, some kind soul thought it was good enough to upload to Scribd, so you can read this now slightly dated presentation here.
Groovy on Mac OS X Leopard
Groovy is the new Ruby-like language that runs on top of the Java VM. It is the backbone of the Grails initiative, which aims to provide a Rails-like environment on the Java platform; Groovy is to Grails as Ruby is to Rails.
This post will show you how to set up Groovy on Leopard, and allow you to explore this intriguing new language.
Prerequisites
First make sure you have java and its dev tools setup on your machine see this post on how to do that.
Now download Groovy from the Groovy site.
Unpack the archive and move it to /usr/share, for example:
$ sudo mv ~/downloads/groovy-1.5.4 /usr/share
Now set ownership and permissions on the directories
$cd /usr/share
$sudo chown -R root:wheel groovy-1.5.4/
$sudo chmod 0755 groovy-1.5.4/bin/*
Now create a symlink to access the current version of Groovy; if you download and install another version later, you can just move the symlink to point at it. This is good practice, as it prevents you having to overwrite your old installation.
$sudo ln -s groovy-1.5.4 groovy
Finally, add the following to your /etc/profile or ~/.profile, depending on whether you want it available for all logons or just your own.
GROOVY_HOME=/usr/share/groovy; export GROOVY_HOME
And add the following to your PATH in the same file; note your PATH should be defined after the above:
PATH=$GROOVY_HOME/bin:$PATH; export PATH
Now you're ready to try it out and see if it works. Open up a terminal and type:
$groovyConsole
The Groovy console window should open up; that's it, you're done. Stay tuned for more Groovy news.
Monday, 17 March 2008
Leopard's A+ scorecard with Java
In preparation for looking at Groovy, Grails, Hadoop and Hypertable, I decided to get my MacBook set up to run Ant, Maven, JUnit etc. After a bit of searching and scouring the net, I found a number of tutorials on how to get this environment running, showing how to download and install this package and that package, set up this file, and add these lines here and there, until my head was spinning.
However, it is a LOT simpler than that. Just install the Xcode 3.0 tools from the optional installs section of your Leopard distribution disk, and it will install and set up:
/usr/share/ant -> ant-1.7.0
/usr/share/maven -> maven-2.0.6
/usr/share/junit -> junit-4.1
and it will throw cvs, subversion and a multitude of other tools in for good measure. Unfortunately, Xcode 3 won't run on Tiger (10.4.x), where you are stuck with Xcode 2.5, which may not set up all this goodness; but since I don't have a Tiger machine any more, I can't verify this.
So, to start the investigation of the Java frameworks/platforms listed above, I just added the following to the end of my /etc/profile, to make sure the packages can find the installed goodies:
JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Home; export JAVA_HOME
ANT_HOME=/usr/share/ant; export ANT_HOME
MAVEN_HOME=/usr/share/maven; export MAVEN_HOME
JUNIT_HOME=/usr/share/junit; export JUNIT_HOME
Saturday, 8 March 2008
Ruby off the Rails
In keeping with my new year's resolutions, I have been struggling to learn Ruby on Rails, armed with an existing PHP application that I wished to replicate in Rails (as a learning exercise; I don't recommend that you seriously consider re-implementing a mature application in Rails).
This is a threefold learning exercise: first learning Ruby the language, then Rails, then how to run and deploy Rails on Mac OS X, my operating system of choice.
One of the things I have learnt so far is that, unlike compiled languages, interpreted languages can tie you up in knots trying to work out why something that seems to run does not do what you want it to do.
I guess the next thing I will have to learn is how to get the Ruby debugger running, or I'm going to be even more prematurely bald.
So I'm running:
Mac OS X Leopard on a MacBook, with the Apple Ruby, MySQL AB-sourced MySQL 5.0.51, Aptana Studio as my IDE and Mongrel as the web server.
Getting that lot to co-exist was an adventure in its own right (and may be the subject of another post), but now I have it working it's a pretty slick environment; I reckon in the last 10 hours I have done as much work as would have taken me a week to get running in PHP or C++.
So, off to find out how to make debugging work in the IDE; I managed it once, but only after considerable fiddling.
BTW: the new MacBooks are wicked developer workstations. The black MacBook came with a 250G drive and 2G of RAM, and is as fast as hell, faster than my 3rd-gen MacBook Pro that I recently gave up.
Now, where's that debugger documentation :-)
Yahoo Vs Microsoft
Somebody sent me this cool video mashup on Facebook. Some of the humour is very "inside" to Yahoo, but it's very funny nonetheless. It was made by Amr, who works for the US search team.
Wednesday, 27 February 2008
Fixing Mac OS X Leopard's abysmal PHP implementation
One of the unsung updates in OS X Leopard was the introduction of Apache 2 and PHP 5.2.4 as standard components for web page sharing. When you enable "Web Sharing" on the Mac, you are essentially starting up a copy of the Apache workhorse on your system, enabled by OS X's Unix underpinnings.
However, in true Apple form, they seem to have gone out of their way to make life hard: the default startup mode for the built-in Apache system is as a 64-bit application (on Intel C2D+ hardware), making it difficult to use the provided PHP implementation "as is" if you need to load any extensions.
The issue stems from the choice of built-in support in the Apple-delivered PHP implementation. There are many deficiencies in the manifest of included options; of note is the lack of support for the PHP GD or GD2 module, used in many applications that require graphical output. PDFlib, libtidy and PEAR support are also notable by their absence.
OK, no problem: download the source for PHP 5.2.4, compile the extensions, and install them, I hear you say....
Problem: on most recent Macs, the PHP implementation is also 64-bit, a configuration that is not supported out of the box in the standard build options of PHP. And if that was not enough, many of the extensions you might want to install depend on external libraries not supplied by Apple, whose build environments also do not support creating the four-way universal libs required to play nice across all the platforms Leopard runs on. Package managers such as Fink or MacPorts haven't caught up with the need to supply four-way fat binaries to cover all the possible platform variations.
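You can check which architectures any given binary was built for with file(1); on a C2D Leopard box the stock Apache reports all four, along these lines (output abbreviated):
$file /usr/sbin/httpd
/usr/sbin/httpd: Mach-O universal binary with 4 architectures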
OK, no problem: I found this excellent tutorial on adding GD lib. However, try as I might, I could not get this to load, probably again due to 64-bit/32-bit unhappiness. My foo is not enough........ :-( But it may work for you, and if GD is the only extension you need, then Bob's your uncle.
Fortunately, for those of us with more challenging PHP environment requirements, Marc Liyanage of http://entropy.ch fame has taken up the challenge and is producing a version of PHP 5.2.5 loaded for bear, which solves all of these build intricacies. Currently in beta 6, it works well on both my C2D Macs and my G4 Macs.
However, one word of caution: Marc's installation process assumes that you have not yet enabled the on-board PHP implementation; if you have, then you need to comment out the LoadModule line you carefully uncommented to enable it, and trust to Marc's magic installation of additional included conf files.
Oh, and another notable change is that php.ini moves from /etc to /usr/local/php5/etc.
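A quick way to confirm which php.ini is actually being read is php's --ini switch (available from around PHP 5.2.3, so both the bundled 5.2.4 and Marc's 5.2.5 should have it; the path shown is what I would expect after Marc's install):
$php --ini | grep Loaded
Loaded Configuration File: /usr/local/php5/etc/php.ini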
More details of beta 6 can be found in this thread: http://www.entropy.ch/phpbb2/viewtopic.php?t=2945&start=0&postdays=0&postorder=asc&highlight=
LMS, CMS time for a merger?
Recently I have been playing around with LMSes (Learning Management Systems). This is a way cool technology, and a whole ecosystem has grown up around authoring, editing and presenting courseware.
My main interest is in providing LMS capabilities inside an existing portal application, hence I have been looking at how to integrate an LMS and a conventional portal/CMS system.
Having come from the publishing industry, I'm quite comfortable with the conventional view of publishing: articles, listings and other media. But my foray into the LMS side of things has shown me that there is a whole wealth of other content types beyond the traditional "flat" content. Working with an LMS and the courseware authoring systems has opened my eyes to the fact that content should be capable of being active.
I think the reason why we don't see more interactive content is that there is a lack of embeddable "runtime" components for it. Sure, we have Flash etc., but those are more production tools, and don't provide the standards and rich ecosystem required for interactive content exchange. Flash essentially locks the content down to a singular presentation format, and does not offer the interchange capabilities that would allow the same interactive content to be rendered adaptively; it is similar in concept to PDF, which again locks down the interaction and presentation models.
In the LMS world there are a number of emerging standards for "packaging" courseware, one of which is SCORM. Most OSS and commercial systems now support it as an interchange format for interactive learning material; it's rapidly becoming the ".doc" format of learning content.
I'll be writing more as I explore this space in the coming weeks, so stay tuned...
Tuesday, 12 February 2008
Friends do interesting things
Some friends of mine have just completed the first phase of a site to allow graduates to find jobs with industry-leading companies. Anyway, you can judge the results for yourself, and if you are looking for a new role, you can even sign up.
Wednesday, 30 January 2008
What I would like in CSS3 - Save/Restore State
Many years ago I was involved in creating PostScript systems; at one point I even wrote my own PostScript rendering system (Postbox).
Now I work with websites and web developers, and one of the capabilities I miss from my PostScript hacking days is the ability to save and restore the graphics state.
Some background.
I work for a large internet publisher, designing and recommending authoring and production systems for folks producing millions of pages of content a year. Increasingly we are moving towards component-based page assembly, where pages are assembled from a library of modules that conform to common interfaces and common presentation standards.
A module may be standalone, or may have to interact with or inherit design elements (colour palettes, fonts, etc.) from the overall page design. Trying to maintain a modular library is a nightmare because of the need to ensure that all variations of the modules on a page are supported by the active cascade running on that page.
Module libraries are relatively easy to manage when you have tens of modules to support, only a few production staff, and a limited set of design variations between pages. But extend that to tens of thousands of modules, thousands of production staff, and hundreds of base site designs, and you soon discover that uniquely crafted CSS cascade descriptions are unmanageable.
We have put a lot of research into designing mechanisms for automatically building a page's CSS description from the manifest of modules installed on the page, but this is itself a complex task.
Sandboxing content
Sadly, all of this could be eliminated if there were a simple way to reset the cascade associated with an id back to the browser default, so that inline CSS could be used to describe the behaviour of the visual area enclosed by the div or span. Coupled with a simple mechanism for referencing particular cascade elements across the reset, allowing the sub-region to selectively pick up characteristics from the main layout, a simple and powerful mechanism would emerge that would aid the portals in their quest to create dynamic content.
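To make the idea concrete, here is a sketch of the kind of reset I am after. The selector name is invented for illustration, and the all shorthand used here is a much later CSS Cascade Level 3 addition that did not exist when this was written:

    /* hypothetical sandboxed module */
    #news-module {
      all: initial;           /* wipe the page cascade back to initial values */
      font-family: inherit;   /* ...then selectively pull characteristics back in */
      color: inherit;
    }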
This would also boost the adoption of personalization, allowing complete layouts to be individually defined for users.
I strongly feel that this capability should be included in the CSS3 specification.
UPDATE: So here we are, eight years after the original post, and the capability I was looking for has finally made an appearance, though not quite in the way I expected. Check out the Shadow DOM initiative, which provides the kind of presentation sandboxing I was seeking. See the link below for more information.
http://www.html5rocks.com/en/tutorials/webcomponents/shadowdom/
Thursday, 24 January 2008
Nostalgia ain't what it used to be
A recent post on an internal Yahoo mailing list triggered a wave of nostalgia for the old, old days of computing.
I was a computer enthusiast during the halcyon days of the early microcomputer hobby scene here in the UK. In particular I was very active in the NatSemi SCMP and SCMP II circles, having built several SCMP-based systems and published designs in Personal Computer World for memory extensions and multiprocessor add-ons for the basic Sinclair MK14 microcomputer trainer kit.
Reading through some of the sites that exist to document this era brings misty tears to my eyes. I still remember the days and nights locked away in what was once an old coal storage space in my first flat, which I had converted to a workshop, soldering iron in hand, building more and more bizarre variations.
The most extreme was a system whose CPU could be switched between an SCMP II, a 6502 and an 8080, so that I could run programs published for any of those architectures without the then-high investment in dedicated RAM and I/O peripherals for each one.
I still remember my then wife complaining bitterly about the tiny solder beads and short lengths of wire-wrap wire that insinuated themselves into the living-room carpet, and the batik-like stains on all my jeans from the ferric chloride I used to etch my own circuit boards.
Ah, those were the days; happy times.
See http://www.mymk14.co.uk for more fun.
Friday, 11 January 2008
Fragile Intel Macs
In total I have four Macs: two G4s and two Intel machines, all of them with SuperDrives fitted. I recently discovered that the SuperDrives in both Intel machines have become flaky.
I first noticed this when I was trying to install Leopard (10.5) on these machines. Leopard weighs in at 7.8GB with all options installed and comes on a dual-layer DVD, which was completely unreadable by both Intel machines.
The solution turned out to be simple. Using a small external USB drive formatted with a GUID partition scheme (this is important, people, pay attention: there are two partition schemes, GUID and Apple Partition Map, and you must use the former or the Intel machines won't boot from it), I used Disk Utility on one of the G4 machines to copy the DVD to the USB drive by "restoring" it to the volume.
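For the command-line inclined, the same "restore" can be done with Apple's asr tool. Something along these lines, though the volume names here are mine and I am going from memory, so treat it as a sketch and check man asr on your release:

    # command-line equivalent of the Disk Utility restore step
    sudo asr restore --source "/Volumes/Mac OS X Install DVD" \
         --target /Volumes/Installer --erase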
Then it was just a matter of plugging the drive into the Intel Macs and running the OS X installer to get Leopard onto the fragile machines.
So whilst I recognize that I'm probably just the victim of bad luck, I really do wonder whether Macs are as reliable now that we have moved to the Intel world.
Wednesday, 9 January 2008
New Years Resolutions
As a follower of truly sad traditions, here are my New Year's resolutions:
1. Stop smoking (again).
2. Lose weight (again).
3. Learn Ruby and Rails.
4. Become more organized.
Trying to be Witty
Recently I have been playing with Wt ("Witty", http://webtoolkit.eu), a fascinating C++-based framework for building web applications, based around the Trolltech Qt programming model. Wt does not require you to have Qt installed, as it supplies its own frameworks.
The most interesting part of Wt is the way it totally abstracts the browser interface, allowing you to write apps as though they were standard Qt-style desktop applications, running instead on a server with a web browser as the client.
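To show what I mean, here is roughly what the canonical Wt hello-world looks like. This is a sketch based on the 3.x-era API, so the exact headers and signal syntax may differ in your release:

    #include <Wt/WApplication>
    #include <Wt/WContainerWidget>
    #include <Wt/WLineEdit>
    #include <Wt/WPushButton>
    #include <Wt/WText>

    using namespace Wt;

    // The whole "page" is a widget tree, exactly as in a Qt desktop app.
    class HelloApplication : public WApplication {
    public:
      HelloApplication(const WEnvironment& env) : WApplication(env) {
        setTitle("Hello, Wt");

        new WText("Your name: ", root());
        nameEdit_ = new WLineEdit(root());
        WPushButton *button = new WPushButton("Greet me", root());
        greeting_ = new WText(root());

        // Qt-style signal/slot wiring; Wt turns this into AJAX (or a full
        // page refresh for older browsers) behind the scenes.
        button->clicked().connect(this, &HelloApplication::greet);
      }

    private:
      WLineEdit *nameEdit_;
      WText *greeting_;

      void greet() {
        greeting_->setText("Hello, " + nameEdit_->text());
      }
    };

    WApplication *createApplication(const WEnvironment& env) {
      return new HelloApplication(env);
    }

    int main(int argc, char **argv) {
      // Wt supplies its own HTTP server; no browser-side code to write.
      return WRun(argc, argv, &createApplication);
    }

Not a line of HTML or JavaScript in sight, which is precisely the attraction.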
Unless you are on a fully supported OS (currently Windows and Linux), Wt is a bugger to install, but after a load of hassle I eventually managed to get it running on my MacBook Pro under OS X 10.4.11.
However, Wt presents me with something of a personal dilemma (my background is in C++-based server-side systems). I had promised that 2008 was the year I became an expert in Ruby and Rails, but now I am sorely tempted to postpone that. My heart says "Yeeeessssss" and is doing air punches, but my head says let it go, R&R is the way to go.
What's an aging technogeek to do?
Spoke too soon - Tempting fate
Having just posted on the utility of the workplace for meeting new and exciting bugs and germs, I immediately succumbed to "the deadly something or other", which laid me out flatter than a garage forecourt.
It's a good job that sleep is the general cure for all these woes, so I have been indulging in an orgy of shuteye.
Which is why I'm up writing blog posts at 4 AM :-)
Monday, 7 January 2008
Work is sooooooooo bad for your health
OK, I spend a load of time on the road, and for the last few months I have hardly been in the office or at home. During that time I was cough- and sniffle-free. But the moment I return to work and sit in the bullpen with all my germ-magnet colleagues, I immediately collapse into a morass of cold and flu.
Going to work is bad for you; the office is a dangerous place to be. It's like being back at school, catching every bug and germ doing the rounds.
Maybe we should take a leaf out of the book of hospitals trying to combat infection, and place alcohol gel dispensers in convenient places, so that people don't track every bug they come into contact with onto phones, keyboards, and table surfaces.
Kicking and screaming
OK, so I finally decided to get a blog up and running. I don't know why I resisted for so long; pure laziness is the most probable cause.
So my New Year's resolution for 2008 is to write and maintain a blog, so I can stave off all the snide remarks from my colleagues about not having one.
So who am I? I'm Tim. I work for a big internet company (Yahoo) as a solutions architect, and I have my nose in everything that can be digitized, run on a computer, downloaded to a computer, burned, ripped, encoded, encrypted, played, or created with a computer.
I plan to share some of my adventures on the wire with others of my ilk.