A Blog about Linux, Open Source and Code! 


Author:  Hollow
June 20, 2010



 

 

After not making any posts for quite some time, I suppose I should start again. This first one is about two different things combined: Ubuntu Lucid Lynx, with the Elementary Desktop installed.

I’m a rather avid reader of www.linux-mag.com, and I was reading it yesterday morning before heading out of the house. The article in question was a post by Christopher Smart, talking about how good the Elementary Desktop is and how it has made vast improvements not just to Gnome itself, but to applications inside Gnome, Nautilus to name just one.

I decided I would install this new desktop and see what all the fuss was about. Mr Smart had mentioned that the whole project started as an attempt to make Linux look like OSX, so I wasn’t too convinced that I would enjoy the desktop. After all, I’m not OSX’s biggest fan by a long shot. Yes, the dock is clever, but the rest of OSX, from a Unix point of view, a power-user point of view and a customizability point of view, is crap! The one genuinely good thing about OSX is its looks.

Elementary is by far the biggest enhancement I’ve seen to any of the major desktops in the last 12 months. Sure, KDE4 has grown and grown and become more and more usable, and Gnome has become a little friendlier, both to the eye and to the user, but Elementary has taken Gnome to another level. Simply put, Elementary is what Gnome should have been under its own steam. The fonts are incredibly clear, and the colour scheme is not only conservative but attractive as well. Compiz works perfectly with it, making the Elementary Desktop very similar indeed to OSX in looks, though thankfully not too similar in behaviour. Docky gets installed along with the Elementary Desktop, which is actually rather cool. I’ve tried a few different docks over the years, just to see if anyone other than Apple could get one working, and up to now no one really had, not without a great amount of effort at least, which is something most users do not wish to invest.

In probably the shortest and simplest summary I’ve ever written for a post: this desktop is fantastic. Go download and install it!


Filed under: Linux Reviews, Reviews ... Comments (0)

After spending a bit of time away from Ubuntu installations and staying with Debian, I finally went back to Kubuntu with Jaunty (9.04), and only because it had KDE4.2 and I’d finally got bored with KDE3.x. When KDE4.3 was released I immediately upgraded to it and was extremely impressed, but something was still lacking in Jaunty. I knew it was a backport though, so I waited for the Beta release of Karmic to come out, which I knew would have KDE4.3 full-on.

I originally upgraded Kubuntu from Jaunty to Karmic over the net. The install went well and I was pleasantly surprised by my new OS; the only problem was that I was too pleased, and I decided to upgrade my laptop as well. The laptop network upgrade did not go well: something happened during the installation (possibly because I was on wireless while doing the upgrade, who knows?) and it simply failed. This left me with a relatively borked laptop and only one real option: download the Karmic Koala Kubuntu 9.10 Beta CD and install it from scratch.

Next I encountered more problems. I found the fresh installation (i.e. not an upgrade) so good that I was now disappointed with my upgraded desktop; it just wasn’t good enough and required a re-install with a fresh system instead. I figured I would download the 64-bit version for my desktop, especially since I already had 64-bit hardware and it was all about to get upgraded anyway. Unfortunately, the 64-bit install just went nowhere. I’m not sure if the processor was too old for modern 64-bit installs (it was a four-year-old AMD Athlon64 Socket 939 4000+ after all) or if the processor was damaged; I suspect the former, because the processor handled 32-bit installs just fine.

I installed the 32-bit system for the time being while I waited for my new hardware and was happy enough, but then the new hardware turned up and the whole ball game changed. I tried to boot up with the existing disk on a new processor, motherboard and RAM. Bearing in mind I’d gone from a four-year-old single-core processor to a brand new dual-core one, from a 4-5 year old DFI motherboard to a brand new Asus one, and from DDR1 to DDR2, I would have been surprised if it had actually booted, and sure enough it didn’t. Usually Linux tends to handle hardware changes pretty well in my experience, but this just wasn’t going to happen.

What a shame I thought, I’m going to HAVE to install the 64bit system instead, DAMN! ;)

So I set about installing my new 64-bit system: completely fresh hardware AND OS, what could be better than that? I am so unbelievably impressed with Karmic Kubuntu that I just can’t express it in words. The problem now is that the interim upgrades have been suspended, so I’m left with those last couple of little bugs that are being fixed by Canonical, waiting with bated breath for the full release to happen so I can download all the Karmicly Krantastic goodies to my system and be even happier!

What’s really new then? Well, the interface looks much cleaner and much sharper, and it just seems “smoother” somehow. Programs are faster to load (even on my older hardware before the upgrade), and there are fewer problems with graphical display errors when using Desktop Effects. Kmail is a MASSIVE improvement in my humble opinion; Amarok is just Amarok, but the “rok” part of the name seems to mean even more now, if you get my meaning. Desktop widgets seem to shine more, crash less and are even easier to find and install than ever before. In short, I’m completely blown away by Karmic and I couldn’t be happier. Please feel free to share your experiences below, good and bad.


Filed under: Linux Reviews ... Comments (2)

Author:  Hollow
November 28, 2008



 

 

Does a purist attitude hurt the development of Linux?

When you have something like Linux, which is available FOR free and IS free, you inevitably get people taking advantage of the situation and giving nothing in return. That is unfortunately just a fact of life: if you offer something at no cost, people will take it, while if you offer something that is free “as in liberty”, not everyone will want it or care. Combine the two and you’ll attract both types of people. This is why Linux already appeals to so many people and why its user base and developer base are expanding daily. You have the people in this world who are idealists and want everyone to be nice to everybody else and be ethical; you have the freeloaders, the cheapskates and the pompous; then you have the realists, like me. Linux manages to cover all these people quite nicely, with Red Hat, Canonical, Debian, Novell and the volunteer-only operations.

Red Hat does not have a purist attitude

As far as the purist attitude goes, Red Hat shouldn’t really exist, or if they do, they certainly shouldn’t charge for what they do. In the purist’s eyes, Red Hat is a corporate company capitalizing on the freedom of Linux and other GPL software, even if they do give development back. The truth is, without corporates, money, trade and a marketplace, the world would almost literally stop rotating, and then we’d all float off into space and be unable to breathe; that’s not a good thing, in case you were wondering. To the best of my knowledge, when Linus, Richard and the other Open Source pioneers created their respective software and laid down the rules and regulations that became the GPL and, if you will, the “mantra” of Open Source Software, nothing was said about being unable to charge for the software or for services pertaining to it. The only rule that came close was that it also had to be available freely, and you had to provide the source code with it, along with any copyright notices, the GPL itself, and the credits of any original works. So is Red Hat a bad company? Not in my opinion, no. They provide direct competition for Microsoft in the server market, and they don’t really do all that much in desktops, except at the “Super Corporate” level.

Is it wrong to be a freeloader?

Not entirely, no. Freeloaders are actually required in order to make anything grow. Let’s face it: without all the people who use Linux and other GPL software on a day-to-day basis, who don’t have the knowledge or expertise to do anything other than use it, there would be far less reason for the developers to improve things so much, on such a regular basis. This is almost the same as asking if money is a bad thing. It is in many ways the root of all evil, but the fact is, without it there would be very little motivation for people to make things, invent things or develop existing creations. Now don’t get me wrong, things would still get invented, developed and refined; it’s human nature to do these things. The difference without money and/or demand, though, is that the motivation is much weaker. Without companies like Canonical and Red Hat, for example, who ARE in it for money, regardless of how much or how little they actually make at present, the current state of the Linux desktop would be NOWHERE NEAR what it is today. If companies like these weren’t out there, constantly trying to develop a product they can use to make money in order to compete with the ultimate corporate (MS), the desktop would still get developed, but at a very slow rate in comparison.

Without compliance there can be no harmony

So the purists believe we should rid our systems of anything proprietary. That includes NVidia and ATi drivers; it also includes proprietary software such as Adobe Reader, Flash or Photoshop, not to mention the various other programs required to do certain jobs for which, I’m sorry, there sometimes just isn’t a FOSS equivalent. One of the things Symsys does for its customers is produce ready-to-print images; these can be anything from articles in a newspaper or magazine to a flier, brochure, business card, car graphics or even a 50ft billboard advertisement. Unfortunately, as much as we adore FOSS, Linux and all that is GPL, in some of these situations we have absolutely no choice but to use proprietary software to get the job done. Before the purists jump on their bandwagon of “you just can’t be bothered to look for the software”: we have. In certain circumstances we found some, and it either didn’t do the job at all, or it didn’t do it anywhere near well enough, quickly enough, or without a severe amount of intervention. Now that’s not FOSS’s fault, and it’s not the developers’ fault; it’s the proprietary software developers’ fault, for creating industry standards and giving the FOSS developers nothing to work with. Regardless of your views on how immoral that may be, they have the legal right to do so. If there just isn’t any FOSS available to do the job, we need to find software that will, or turn the job away to a competitor, and that just wouldn’t be good business; unfortunately, feeding my family is my number one moral.

We use FOSS wherever and whenever physically possible; we even custom-write software occasionally to make it do what we want, but that isn’t always practical, possible or financially viable given deadlines, customer expectations and workloads. The fact that I run dual 19″ screens and our designer runs dual 22″ screens means that without NVidia drivers we’d be running one screen each, because getting them working to a standard that is acceptable, usable and productive would be completely futile in terms of the time, money and resources needed to make it happen in-house. Instead we use NVidia drivers. We want the functionality that comes with using Adobe Reader to read PDFs, and we want the compatibility you only get when using Flash Player to watch Flash movies; Gnash is great, but it just isn’t Flash.

Purists are doing the very thing they supposedly hate

In expecting every single user of FOSS to return something, they are themselves demanding the software be paid for in some way. Collaboration and co-operation are imperative to FOSS and to the world itself; the fact is, though, so are freeloaders. Without freeloaders there is less demand; without demand there is very little production, simple as that. Most FOSS is developed initially because it fills the purpose of the person developing it, or the person who hired them to develop it; later it is developed further because a demand exists. The developer chooses to make it freely available to anyone, anywhere, anytime, and they also make a conscious decision to allow people to do what they want with the source code (for the most part): change it, make addons for it, convert it to different platforms and so on. In doing so they also acknowledge that people can go and sell it as a boxed product and make money from it, even though they’ve done nothing but burn it to disc and stick it in a box. People and companies are also free to charge for services pertaining to that software, even if they didn’t develop it, change it in any way, or have anything to do with it up to the point they advertised the service relating to it. If they couldn’t, the adoption and spread of Linux would be far slower, thus lowering demand, thus slowing production and development.

Purism itself isn’t evil, it has the best of intentions

There’s nothing wrong with idealism, and there is certainly nothing wrong with purism if you can attain it. However, preaching about purism in the open media and insisting that everyone follow your lead makes you seem very much like a cult or religious evangelist, and certainly makes you seem very immature and unrealistic. Trying to enforce your views and your methods on the rest of the world around you just isn’t groovy, baby, and it certainly isn’t the way of FOSS. The ability to make Open Source business models financially rewarding and viable is imperative to FOSS development, as is the ability for people to use it freely, with no investment whatsoever, be that time, money or skills. Without these two primary and unfortunately “unfree” attributes, FOSS would surely be underdeveloped. To enable the adoption of FOSS and reduce the market share of proprietary software, FOSS must first keep its enemies closer than it keeps its friends; it must enable its users to interact with people who have not yet seen the light, in order that they might see it in the future. Looking at things practically: without the ability to sell services for FOSS, I wouldn’t be in business; without the ability to use NVidia drivers, I wouldn’t be able to use Linux productively; and without the freeloaders, Linux wouldn’t even be developed enough for my customers to want to use it, so I’d have no target market. That may sound selfish because I’ve used my own business as an example, but I need freeloaders; Linux and FOSS need freeloaders; Red Hat, Google, Canonical, Novell, they all need freeloaders, and so indeed does the world itself. I admire purists for their ability to make themselves completely vendor independent, to make their own choices without detrimental effects on others, and to keep such a clean conscience. They need to realise, though, that not everyone can do that, no matter how much they may want to.


Filed under: Linux Reviews ... Comments (0)

Over the years I’ve been a helpdesk agent, a telecommunications network administrator, an I.T. engineer, a .NET programmer, a web applications programmer and a Linux evangelist (the last one is a full-time job, but it doesn’t pay as well as the others). Needless to say, in these roles and over such a long period of time I’ve used my fair share of cloning programs, and definitely more than my fair share of partitioning tools. Clonezilla and GParted have really made an impression on me, though.

Let’s take a scenario where you might use these programs. You’ve installed your Linux desktop machine using guided partitioning, but you were clever enough to make /home a separate partition. On guided partitioning, Linux installers tend to assume that you only need 7-8GB of space for the main operating system and give the rest of the drive to swap and /home. This is absolutely fine, until of course you install so many programs into /usr, /sbin, /opt and /etc that you quickly run out of space on your main partition, while the rest of your entire disk sits as free space in /home.
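Before touching any partitions, it’s worth confirming where the space has actually gone. A minimal sketch (the directory list is just an example, adjust it to your own system):

```shell
# See how full the root filesystem really is
df -h /

# Find the biggest consumers among the usual suspects,
# largest first (-h makes sizes human-readable)
du -sh /usr /opt /var /etc 2>/dev/null | sort -rh
```

If /home shows up nearly empty while / is over 90% used, you’re in exactly the scenario described above.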

So given this scenario, what do you do? Well, if you’ve got a free hard drive you could boot up a Live CD, mount your partitions, create new partitions on the second hard drive for /etc, /var, /opt, /usr or any other directory that holds a lot of data, move the data there to free up some space, then mount them via fstab, spreading your file system across one or more additional disks. The problem is that you’re then using up another drive (or several), and worse, you’re spreading your file system across lots of different drives. Some people actually choose to do this, spreading the risk rather than having a single point of failure, but what happens if the drive that holds your /etc, /opt and /var folders dies while everything else stays intact? Or if one of these very important mount points fails on boot? There are pros and cons, as with everything. My personal choice, if the primary drive was big enough and I had a second drive to move data to temporarily, would be the following.

Create folders in /media for the data you need to move around, for example mkdir /media/TempHome. Then (if your other drive is sdb with one partition, formatted as ext3, for example):

mount -t ext3 /dev/sdb1 /media/TempHome

then

mkdir /media/TempHome/home/
cp -Rp /home/* /media/TempHome/home/

This will copy everything from the /home/ directory (including other people’s home folders if it’s a multi-user system) to the second drive (sdb1, mounted at /media/TempHome), maintaining all permissions on files and folders, so that the same command later, with the locations swapped around, will bring everything back without any need to mess around resetting permissions. The next thing to do is obtain a copy of Clonezilla and burn it to a bootable CD. Reboot with Clonezilla in the CD drive, and once it boots you get a very self-explanatory guide to help you clone your main hard drive (probably sda or hda on most systems, but it can differ), effectively backing up your existing system “as-is”, without changing anything, to a different drive, network location or removable media as a Clonezilla image. This is an important step, because when you’re playing with partitions you never know whether it’s definitely going to work without breaking things, and having that Clonezilla image lets you boot straight back into Clonezilla upon finding a broken system, select the image, select the primary drive and hit restore.

So: you’ve copied your /home data to a separate hard drive, and you’ve cloned your entire system “as-is” and saved the clone image to the second drive with Clonezilla. What you need to do now is rearrange your existing partitions to make them more usable and give your main system more space. Sounds easy, right? Well, it is. You should already have downloaded GParted (although I didn’t tell you to yet, you should have guessed :D ) and burned it to disc. Stick the GParted CD in the drive and boot up the machine; it boots automatically into a Fluxbox interface and the GParted program starts automatically as well. From previous experience, GParted doesn’t usually have a problem deleting, re-sizing or adding partitions, but every time I’ve tried to move one it’s failed on me. So my recommendation is to delete your home partition on /dev/sda6 in GParted (remember, you’ve copied your entire home folder to sdb already AND you’ve cloned the drive, so everything is double-backed-up in case of disaster), delete the swap partition as well if it’s in the same extended partition, then delete the extended partition. This shouldn’t take much time at all.

Next on the list is to enlarge your main partition (we’re assuming /dev/sda1) to an appropriate size. Regardless of how big your hard drive is, you don’t want to waste space, and unless you’re a software hoarder (I am) you probably won’t need an sda1 partition any larger than about 50GB. So re-size that partition in GParted using the GUI, making it 50GB (50000MB) in size. This can take a bit of time, as GParted tends to do two file-system checks while re-sizing (a good thing that many partition manipulators don’t do). Once it’s done, go ahead and create a new extended partition spanning the entire free space on the drive. Next, create a new swap partition. A common rule of thumb is twice the size of your RAM, so if you’ve got 1GB of RAM you would make swap 2GB, but I usually just go for 2.5GB no matter what: if you’ve got less RAM you’re more likely to run out, and if you’ve got more RAM it won’t hurt, because you’ve probably got a large hard drive anyway. Once you’ve created the partition, highlight it and choose swapon from the menus in GParted; this activates the swap partition for your system to use.
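The twice-your-RAM rule of thumb is easy to check from the command line before committing to a partition size. A small sketch that reads the total from /proc/meminfo:

```shell
# Read total RAM in kB from /proc/meminfo and double it,
# converting to MB for comparison with GParted's size field
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_mb=$(( ram_kb * 2 / 1024 ))
echo "Suggested swap size: ${swap_mb} MB"

# The non-GUI equivalent of GParted's format + swapon steps would be
# (destructive, and /dev/sda5 is only the example device from this
# guide - double-check yours before running anything like this):
#   mkswap /dev/sda5
#   swapon /dev/sda5
```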

Now that you’ve created swap, you need to create your home partition again. The size should simply be whatever space remains on the hard drive, unless you want other partitions on it because you’ve got a 1TB drive or something. Set it to ext3 when you format the partition. Next, close the GParted GUI, but don’t reboot yet. Click the console icon on the desktop and type the following commands.

mkdir /media/Home/
mkdir /media/OldHome/

mount -t ext3 /dev/sda6 /media/Home/
mount -t ext3 /dev/sdb1 /media/OldHome/

cp -Rp /media/OldHome/home/* /media/Home/
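Before deleting anything from the backup drive, it’s cheap to confirm that the restored copy actually matches the backup. The paths below follow the example layout above; diff exits non-zero if the trees differ in any way:

```shell
# Recursively compare the restored home with the backup copy;
# prints nothing and exits 0 when the trees are identical
diff -r /media/OldHome/home/ /media/Home/ && echo "Trees match"
```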

We’ve been working on the assumption throughout that sda is your main hard drive, with /dev/sda1 as the primary partition, /dev/sda2 as the extended partition, swap on /dev/sda5 (remember, the first LOGICAL partition on a hard drive is ALWAYS numbered 5) and /dev/sda6 about to be mounted as /home on your main system. If you had two separate partitions for home and your main system in the first place, this should be it, and you SHOULD be able to reboot.

If, however, you didn’t have a separate home partition in the first place and have been following this guide specifically to create one, you’ll still need to do the following before you reboot. You should now have all your data in the right place with the right permissions, and you should also still have it backed up on the second hard drive, both as normal files and as a Clonezilla image, so we’re nearly finished. All you need to do now is edit the /etc/fstab file on your main system before booting into it; otherwise you won’t be able to log in. So follow the remaining commands before rebooting.

mkdir /media/System

mount -t ext3 /dev/sda1 /media/System

cd /media/System/etc/

nano fstab

Then add the following line to it, to make sure it mounts your /home partition on boot:

/dev/sda6       /home           ext3    defaults        0       2

Be careful when editing /etc/fstab: make sure you have the correct partition number (/dev/sda6 in our example), the right mount point (/home in this case) and the right file system (usually ext3, though occasionally people use ext2 or other file systems), and you should be fine.
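One extra precaution worth knowing: device names like /dev/sda6 can change if you add or reorder drives, so a more robust fstab entry references the filesystem’s UUID instead. A sketch, where the device path is only the example one from this guide:

```shell
# Look up the partition's UUID (blkid ships with most distros;
# needs root, and the device must actually exist on your system)
uuid=$(blkid -s UUID -o value /dev/sda6)

# Build the equivalent fstab line using UUID= instead of the
# raw device name, then append it to the mounted system's fstab
line="UUID=${uuid}  /home  ext3  defaults  0  2"
echo "$line"
```

The printed line replaces the /dev/sda6 entry shown above; everything after the first field is identical.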

Either way, hopefully you can see from the seriously small number of command-line actions required in this guide that Clonezilla and GParted are excellent tools for people who need to do all the technical stuff but just aren’t strong on the command line, prefer something graphical, or simply don’t have the time to type all the commands. It’s also far less risky to use a program specifically designed to issue only certain commands, and to do so with everything unmounted, than to try to unmount the right partitions at the right time and issue all the commands by hand, which is prone to errors. And you’d still need something like Clonezilla to clone the drive rather than just copying it elsewhere anyway.


Filed under: Linux Reviews ... Comments (0)