Hard disks are often the first component in a computer to die, so it is important to be able to quickly and easily migrate your data from one disk to another without having to reinstall your operating system, OS updates, and all of your applications, restore your files, and so on.
Between hard drives that are about to fail and drives that are just out of space, I've been reaching for my Clonezilla CD an awful lot lately. It is a bootable CD that lets you image a drive or partition, or clone one directly to another (similar to Norton Ghost). It can also add a Clonezilla bootloader to whatever you clone your disk to, so the clone itself is bootable and ready to install your disk image onto any number of drives. This is super handy if you want to make a custom restore DVD for, say, a new laptop.
It uses an ncurses interface, so you will have to tab through the fields and use the space bar to press buttons and select options, but it does a good job of walking you through the process and is actually quite intuitive. It can and will save you a ton of time if your hardware is failing but your software setup is just fine.
I had a somewhat unorthodox use for Clonezilla this evening, which prompted me to finally blog about it. I run Windows XP in a virtual machine using VirtualBox, and when I initially set it up (a few years ago) I only allocated 10 gigs to the virtual disk. This evening I went to install some development tools on the XP VM and ran out of space. VirtualBox didn't have an interface for expanding the disk (that I could find; someone please set me straight on this if I am wrong), so I was in a bit of a pickle until I remembered Clonezilla.
I created a brand-new 100 gig VirtualBox virtual hard disk and attached it to the XP VM. Then I mounted the Clonezilla ISO file (which I happened to have lying around) as the CD-ROM drive and booted the VM into Clonezilla. Clonezilla saw the old 10 gig hard drive and the new 100 gig hard drive, and I simply directed it to clone the entire old disk to the new disk. It took about 15 minutes, and when it was done I simply removed the Clonezilla ISO and the old drive from the VM, fired up the VM, and I was back up and running (I had to resize the partition after I got back into Windows XP, but EaseUS Partition Master will take care of that for you without even rebooting, and it's free).
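As an aside, the disk juggling itself can be scripted with VBoxManage instead of clicking through the GUI. Here is a rough sketch under the assumption that the VM is named "XP" and uses the default "IDE Controller" (the names and paths are illustrative, not what I actually typed):

VBoxManage createhd --filename ~/VirtualBox/xp-100gb.vdi --size 102400
VBoxManage storageattach "XP" --storagectl "IDE Controller" --port 1 --device 0 --type hdd --medium ~/VirtualBox/xp-100gb.vdi
VBoxManage storageattach "XP" --storagectl "IDE Controller" --port 1 --device 1 --type dvddrive --medium ~/clonezilla-live.iso

The size is given in megabytes, so 102400 gets you the 100 gig disk.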
Thursday, May 5, 2011
Creating the home video, revisited
For Mother's Day this year my wife and I decided to take all of the best, cutest videos of our daughter and compile them into a DVD for our mothers and grandmothers. So we gathered together all of our favorite moments, each of which was a few minutes long and had been taken on our cell phones (she has a Blackberry and I have an Android phone).
I used a USB cable to transfer the files to a folder called "raw_footage" on my desktop machine running Linux Mint 10. That's when I noticed that every single video had an extension of 3GP. I was able to play them in Totem, so no problems so far.
I fired up PiTiVi, which has come a long way since the last time I blogged about it. After adding files to the timeline at the bottom, I noticed that the video and audio tracks each have a horizontal red line going through them (the video has the line going across the top, the audio has the line going through the middle). With this bar you can double-click to set a fixed point, then drag the line up or down to fade the audio or video in or out. I used this feature to create fade in and fade out effects for each shot, and to turn up the volume on one of the clips that was too quiet to hear. Sweet!
Everything was coming along swimmingly until I went to render the project. PiTiVi just kept crashing! It would say it was rendering and then hang there, unresponsive to input, or it would start to render but the progress bar would never appear, all while the estimated time remaining climbed indefinitely. Super frustrating! I tried changing output formats but nothing worked. Finally I removed all of the videos from the timeline and rendered each video file individually to Ogg Theora. After that PiTiVi would render to whatever format I wanted. Whew!
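If PiTiVi refuses to cooperate at this step, the per-file conversion can also be done from a terminal. Here is a sketch with ffmpeg2theora, assuming it is installed from the repos and your clips are in the current directory; this is just an alternative route to the same Ogg Theora files, not what I actually ran:

for f in *.3gp; do ffmpeg2theora "$f" -o "${f%.3gp}.ogv"; done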
At this point I was ready to be done with this project, but I was about to become the next Beowulf, having just defeated Grendel only to have to descend to the underworld to face Grendel's mother.
This next challenge came when I tried to add subtitles and create the DVD. I used Jubler to create the file containing the subtitles, and I used DeVeDe to create the DVD image file itself (along with menus etc). Both Jubler and DeVeDe use MPlayer as a back-end for multimedia support, which makes a lot of sense since MPlayer has been around forever and has a reputation for supporting every video and audio format known to man. So I should have been safe since all of my files are in Ogg Theora now, right? I mean, there's no way MPlayer is going to have difficulty with the most popular free and open source video format, right?
Wrong!
For reasons totally beyond my comprehension, the default Gnome MPlayer installation/configuration on Ubuntu 10.10 and Linux Mint 10 is totally incapable of playing Ogg Theora out of the box. It failed with a generic error ("MPlayer interrupted by signal 11 in module decode video"), which I Googled for several hours, turning up empty-handed. To this day I have no idea how to get MPlayer to play Ogg Theora files on two of the most popular Linux distros. Totem was able to play them, which tells me that this has nothing to do with the availability of codecs. I think it's totally insane that there is nothing I can do to get one of the most popular media players, which serves as a back-end for a lot of A/V editing tools, to handle Ogg Theora. It's Ogg Theora, people! I thought Windows Media Player was the only utility that still couldn't play it!
For all that, the workaround I found wasn't all that bad: I simply used PiTiVi to re-render my videos into MP4 (with the x264 video codec and the LAME audio codec), which MPlayer was more than happy to play. Once that was done it was pretty much smooth sailing: I fired up Jubler and created the .ass files that told DeVeDe where to put each subtitle and how long to display it (I used this post as a guide), then DeVeDe created the ISO without a hitch.
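Incidentally, that re-render can also be approximated on the command line. A hedged sketch with ffmpeg, assuming your build was compiled with libx264 and libmp3lame support (the file name is made up):

ffmpeg -i clip1.ogv -vcodec libx264 -acodec libmp3lame clip1.mp4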
Saturday, April 23, 2011
Backing up with rdiff-backup
I recently tried out rdiff-backup for my backups and I like it a lot. It is a command-line utility written in Python that can operate locally or remotely via SSH. The first time it runs, it copies all of your files to the backup directory. On subsequent backups, it only copies whatever has changed since the most recent backup, updates the mirror, and stores the reverse diffs so earlier versions can be reconstructed. The end result is that you always have a fully up-to-date mirror of your files, but at any point you can restore from a previous backup. The backup directory consumes minimal disk space, and the backup process is very fast since only the changes are copied.
The syntax is similar to the "cp" command: the command itself, followed by the source directory, then the destination (backup) directory, like so:
rdiff-backup /home/jizldrangs /usr/backups
When using SSH, use the server name, followed by double colons and the absolute path, like this:
rdiff-backup /home/jizldrangs fileServerName::/home/jizldrangs/backups
As with many command-line tools, there are a lot of options, most importantly the option to include or exclude certain paths or files. See the examples and man page for details on how to fine-tune your backup.
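For example, here is a hypothetical backup that skips a cache directory and temp files (the paths are made up; substitute your own clutter):

rdiff-backup --exclude /home/jizldrangs/.cache --exclude '**/*.tmp' /home/jizldrangs /usr/backups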
I put this on my netbook and desktop, which are both running Ubuntu Maverick, but my wife's machine had only the sporadic backups I had made to our USB hard drive, and I wanted a more consistent plan. Fortunately, rdiff-backup, being a Python application, also has a Windows version. If you are backing up to a local directory or a mounted network drive, you are good to go.
If you want to back up over SSH, it gets a little sticky, but it can be done. Using the instructions in this post, I downloaded plink.exe (the command-line version of PuTTY) and created a batch file with the following:
"C:\Program Files\rdiff-backup.exe" -v5 --no-hard-links --exclude-symbolic-links --remote-schema "plink.exe -i rsa.ppk %%s rdiff-backup --server" "C:\\Documents and Settings\Mrs. Jizldrangs\My Documents" mrsjizldrangs@myfileserver::/home/mrsjizldrangs/backups-my-docs
This batch resides in the same directory as plink.exe, which is why the full path isn't specified. Here is a breakdown of the arguments:
- no-hard-links and exclude-symbolic-links: these are necessary for Windows machines per the blog post above
- remote-schema: The method of contacting a remote server (in our case, an SSH server over plink.exe)
- The last two arguments are the source directory and destination (i.e. backup) directory
- i: the name of the SSH key to use for authentication. I created an SSH keypair using PuTTYgen, which generates two files, a public key and a private key. I added the contents of the public key to the authorized_keys file on the server; the argument specified above is the private key, which lives in the same directory as plink.exe and the batch file
- The %%s is a placeholder that rdiff-backup fills in with the remote host name (the percent sign is doubled because percent signs must be escaped in batch files); everything after it is the command to run on the remote server
- rdiff-backup --server: This is run on the remote machine and all it does is start rdiff-backup in server mode
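To make the Windows side hands-off, the batch file can be run on a schedule. Here is a sketch using the built-in schtasks tool (the task name, path, and time are placeholders, and the exact time format varies a bit by Windows version):

schtasks /create /tn "rdiff-backup" /tr "C:\tools\backup.bat" /sc daily /st 02:00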
Restoring works much the same way, with the arguments reversed. Here is the command I used when I restored my home directory onto a fresh Linux Mint install:
rdiff-backup -v5 --force -r now --exclude **/.subversion/** --exclude **/.gvfs/** --exclude **/.local/** myFileServer::/home/jizldrangs/vengeance-backup ~
Here's a breakdown of the arguments:
- v5: Verbosity level 5. The levels run from 1 (the quietest) through 9 (which outputs so much info that it is impossible to read); 5 is a happy medium that lists the files it is working on.
- force: this is necessary to add when doing a restore to a directory that already has some version of the files you are trying to restore. In my case, the default Home directory created for me by Linux Mint already had some default folders, so I had to force rdiff-backup to overwrite them with the version from my backup.
- r: specifies a restore
- now: tells rdiff-backup which point in time to restore as of (see the man page for how to specify a past backup instead)
- exclude: tells rdiff-backup that these folders exist in the backup but not to restore them
- the last two arguments specify where to restore from (i.e. the backup directory) and where to restore to (in my case the Home directory; you can change this to restore somewhere else and keep access to multiple versions of your files)
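You can also reach back in time for a single file. Here is a sketch that pulls a file as it existed 10 days ago (the file name is invented for illustration):

rdiff-backup -r 10D myFileServer::/home/jizldrangs/vengeance-backup/Documents/resume.odt ~/resume-10-days-ago.odt

And to see which backup sessions are available to restore from:

rdiff-backup --list-increments myFileServer::/home/jizldrangs/vengeance-backup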
Saturday, January 15, 2011
Getting an HP LaserJet 1018 to work under Ubuntu Server
I just rebuilt my old Slackware 11 machine into a shiny, new Ubuntu Server 10.10 (maverick) machine. Wireless printing was the most important function of my old server, so I had to get it working on the new server.
The LaserJet 1018 uses the foo2zjs driver, which works great when installed from the repos (I believe it is part of the default installation, so there is no new package to install); however, the 1018 requires the computer to provide the firmware. Here is how to do that:
First, get the firmware on your machine. Run:
wget http://foo2zjs.rkkda.com/firmware/sihp1018.tar.gz
Then unpack it, and run a utility to convert the image to a firmware file:
tar zxf sihp1018.tar.gz
arm2hpdl sihp1018.img > sihp1018.dl
Copy it to your /usr directory for safekeeping:
sudo cp sihp1018.dl /usr/share/foo2zjs/firmware/
Run the following to send the firmware to the printer:
cat sihp1018.dl > /dev/usb/lp0
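Before moving on, you can optionally sanity-check that the transfer happened by looking at the device node and the kernel log:

ls -l /dev/usb/lp0
dmesg | tail

You should see USB printer activity in the log if the firmware went through.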
That's it, you should be able to print! Of course, your printer will only hold on to the firmware as long as it stays powered up; when the printer loses power, it loses the firmware as well. You don't want to have to run that command by hand every time, so push the task off onto udev by creating a new rule. Change into /etc/udev/rules.d, create a new file called 80-printing.rules, and put the following inside (note the /bin/sh -c wrapper: udev does not run commands through a shell, so a bare output redirect would not work):
ACTION=="add", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="4117", RUN+="cat /usr/share/foo2zjs/firmware/sihp1018.dl > /dev/usb/lp0"
Save and close the file, then restart udev by running:
sudo service udev restart
And you should be good to go! A big shout-out goes out to the folks on this Ubuntu Forums thread, who filled in the gaps of my knowledge. :)
Tuesday, June 22, 2010
Whole-disk encryption on Ubuntu
I've had my wife's laptop running whole-disk encryption with TrueCrypt for a couple of years now, and I always wanted to get that level of security on my Ubuntu machine. It really makes a lot of sense for the laptops to have as much privacy protection as possible, since we travel with them and therefore they are at higher risk of being stolen. This week I finally got the opportunity to try it out.
The Plan
I knew I was going to take a performance hit, so I got myself an 8 gig SD card. The root of my Ubuntu installation would go on it, with an unencrypted /boot partition on the hard drive as well as a 60 gig encrypted /home partition to store all of my files. The home partition needs its own encryption key, so I decided on a key file that would be stored on the already-encrypted root partition, allowing the OS to automatically unlock the home partition and mount it at boot time.
Installation
I downloaded the Ubuntu Alternate Install disk, since the standard installer does not include the option to set up logical volumes, which are necessary for things like encryption or software RAID. The installation process was somewhat involved, as you need to manually configure your partitions. There's simply no way around it, as your /boot directory needs to be unencrypted (the software that performs on-the-fly encryption and decryption is a kernel module, and it can't be started if it is itself encrypted). I tried to configure my /home partition on the 60 gig partition on the hard drive like I wanted, but I kept running into weird problems with the installer, so I decided to take my chances installing /home on the SD card with everything else and seeing if I could set up /home on the 60 gig encrypted partition later.
So I ended up with /boot on an unencrypted partition on the hard drive, my swap space on an encrypted partition on the hard drive, and the root directory with everything else (/etc, /usr, /bin, /home, etc) on the encrypted partition on the SD card.
Configuration of the /home partition on the hard drive
Less than 8 gigs of space was just not going to cut it for my /home partition, so I was anxious to get the 60 gig encrypted partition configured.
After much Googling, I learned that encrypted volumes in Linux are typically configured on logical volumes. This means creating a physical partition, configuring it as a physical volume, adding it to a volume group, creating a logical volume inside the group, and finally installing a file system inside the logical volume; fortunately a handy program called cryptsetup takes care of most of that for you. Encrypted volumes use LUKS, the Linux Unified Key Setup, along with dm-crypt. One of the nice things about LUKS is that it contains 8 key "slots", meaning that you can have up to 8 passphrases or key files, any of which will unlock the volume. This allows me to have a backup passphrase in case I need to reinstall the operating system or the key file becomes corrupted.
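Once the volume exists (we create it in the next step), you can inspect those slots for yourself:

$ sudo cryptsetup luksDump /dev/sda7

The output lists Key Slot 0 through Key Slot 7 and shows whether each one is enabled or disabled.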
So I set up the 60 gig partition, starting by initializing LUKS on it:
$ sudo cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/sda7
It asked me for a passphrase, which I supplied. When it was finished, I unlocked the new encrypted partition, giving the unlocked mapping the name pvHomeDir:
$ sudo cryptsetup luksOpen /dev/sda7 pvHomeDir
That exposes the decrypted contents at /dev/mapper/pvHomeDir, which I then initialized as a physical volume:
$ sudo pvcreate /dev/mapper/pvHomeDir
Then I created a volume group called vgHomeDir and added that volume:
$ sudo vgcreate vgHomeDir /dev/mapper/pvHomeDir
I then created a logical volume called lvHomeDir that fills the whole group (the LUKS and LVM headers eat a little space, so asking for exactly 60G can fail; 100%FREE sidesteps that):
$ sudo lvcreate -n lvHomeDir -l 100%FREE vgHomeDir
Then I installed an ext4 file system in the new logical volume (note the name and location of the volume; it is located in /dev/mapper and has the name of the volume group prepended with a dash to the name I gave it):
$ sudo mkfs.ext4 /dev/mapper/vgHomeDir-lvHomeDir
OK, at this point the drive is all set up, but it needs that key file so that I don't have to enter two passphrases every time my computer boots. I created a folder under /usr called keyfile, copied a picture of myself and my daughter into it, and renamed it "file". To add the file as a key to the new partition I ran the following (cryptsetup prompts for the existing passphrase before it adds the new key):
$ sudo cryptsetup luksAddKey /dev/sda7 /usr/keyfile/file
Almost done! The partition just needs to be configured so that it will be used for my /home directory. I edited two files:
1. In /etc/crypttab I entered:
pvHomeDir /dev/sda7 /usr/keyfile/file luks,retry=1,lvm=vgHomeDir
2. In /etc/fstab I entered:
/dev/mapper/vgHomeDir-lvHomeDir /home ext4 defaults 0 2
I created a directory called crypt in my home directory on the SD card and mounted the volume with the following command:
$ sudo mount /dev/mapper/vgHomeDir-lvHomeDir /home/jizldrangs/crypt
I then restored my home directory to that folder, and when I rebooted, I had my old desktop background and all of my files!
Drawbacks
1. I have noticed a performance hit when booting and every so often when browsing the web in Firefox
2. During the installation process, I chose the "random key" option for my swap space, so there is no way to do true hibernation, where the state of the machine in memory is saved to disk and restored later. Suspend, which is where the computer turns off most components and uses minimal power, still works just fine.
Saturday, May 8, 2010
Browse securely from a public WiFi connection with SSH
If you are on the road and jump on your hotel's free WiFi, be afraid. Be very afraid. Why? All of your network traffic is being broadcast in all directions with no security protection whatsoever, and it is insanely easy for anyone to read that traffic using freely available tools. Using ARP spoofing, a snooper can associate his MAC address with the gateway's IP address, thereby routing all internet-bound traffic through his or her machine. Even SSL is not entirely safe, as an attacker can use SSL's renegotiation capability to trick a server into giving the attacker access to your session.
What's a Linux geek to do? Simple: SSH!
Using SSH you can create a secure tunnel from your laptop to your home computer, and pass all of your web traffic coming from your laptop through that connection to your home computer, then out to the internet. It's simple; here's how it's done:
- Before you leave your home, make sure your SSH server is set up properly (see my instructions on how to harden it). We'll say for simplicity that you have configured your SSH server to listen for incoming connections on port 1234.
- Log into your router's configuration interface and configure it to forward all incoming traffic that arrives at port 1234 to your SSH server's IP address.
- Go to www.whatismyip.com and write down your public IP address. For our example, we'll say that it is 200.200.200.200.
- When you arrive at your hotel, coffee shop, library, etc., pick a port above 1024 to use on your laptop. The SSH client will listen on that port and forward all traffic to your home machine; for our example it will be port 4321.
- Open a terminal and run the following command: ssh -D portNumber -N publicIPAddress (in our example, ssh -D 4321 -N 200.200.200.200). If your SSH client is not already configured to use your custom port, add "-p 1234". After you hit Enter, your cursor will move to the beginning of the next line and sit there blinking. This means that the tunnel has been established and is ready to start forwarding traffic. Leave the terminal open.
- Go into Firefox and go to Edit, then Preferences. When the Preferences window pops up, click the Advanced icon at the top, flip to the Network tab and hit Settings. That will pop up yet another window. Put the bullet in "Manual proxy configuration", enter "localhost" in the box next to "SOCKS Host", and enter the port you selected in step 4 (in our example, 4321) into the Port box.
- Hit OK, then Close, and restart Firefox.
- Browse to www.whatismyip.com and verify that it detects your home IP address as the one you are browsing from. If whatismyip.com displays your home IP address, you have successfully configured Firefox to tunnel through SSH!
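If you plan to do this regularly, the whole incantation can live in your SSH client configuration so the tunnel comes up with one short command. Here is a sketch of a ~/.ssh/config entry using our example numbers (the "hometunnel" alias is made up):

Host hometunnel
    HostName 200.200.200.200
    Port 1234
    DynamicForward 4321

With that in place, ssh -N hometunnel establishes the same SOCKS tunnel on local port 4321.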
Tuesday, April 20, 2010
Make web apps feel more like native apps with Prism
As I've mentioned before, I'm not crazy about the idea of web apps. I much prefer the control and integrated experience that comes with desktop apps. That's why I tried Prism, and I am loving it.
Prism is a "Site Specific" web browser built by the folks over at Mozilla. They stripped out all of the menus, toolbars, navigation buttons, etc. that you would normally expect from a web browser because it is intended to be used with a single web application per window. You basically give it a URL when you launch Prism, and that window is dedicated to that site. Each web app gets its own window and all navigation is done through the links provided by the app itself.
This doesn't sound like any big deal, but there are several reasons why I like it:
- I am able to reclaim the extra screen real estate (which is especially important on a netbook).
- For web-based apps, I prefer to have a dedicated launcher. If I am trying to check my GMail, I don't want to launch Firefox, wait for my home page to load, then browse to GMail; that is a lot of steps and I can reduce it to one click with Prism. This also allows me to keep my web browsing separate from my web apps. I generally have about 25 tabs open at a time, so with Firefox dedicated to browsing, I can open and close tabs with reckless abandon and don't have to worry about making sure that "special" tab stays open because if I close it I have to sign in again. Let's face it, Firefox can be a little unstable at times, so if one of my tabs is causing Firefox to flake out, it is nice to be able to close and restart Firefox without losing my Grooveshark music or Gmail.
- The extra window is really nice because it gets its own tab in the taskbar. I know that no matter what I am doing, my web app is only a click away, just like every other app. This is a great window-management and usability benefit. For a while I was listening to Grooveshark inside Firefox, and whenever someone came up to talk to me, I had to spend a painful 5 to 10 seconds finding the Firefox window on my taskbar, waiting for it to maximize, flipping to the Grooveshark tab, and clicking Pause. It is downright rude to keep someone waiting that long while I pause my music, so I welcome the chance to cut down on that time.
If you like what you see, I recommend that you give it a shot. You can get Prism from the Ubuntu repositories. It comes with a Firefox extension, and you simply browse to the web page and go to Tools, then to Convert Website to Application. That will pop up a window which will allow you to specify the name, URL, and icon of the application. Most of the time that stuff will be filled out for you and all you have to do is check the Desktop button.
Once the Launcher is created on your desktop (unfortunately Prism does not give you the option of specifying where the launcher will be created) you can copy it to your Gnome panel or create a menu item pointing to it.
As always, I have a few caveats to share:
- Prism uses the same process name regardless of how many windows are open, so if you create separate launchers and add them to a dock like Avant Window Navigator, it might get kinda confused.
- The Firefox plugins are not available to Prism, so if you are accustomed to the luxury of plugins such as AdBlock Plus, NoScript, or any of the thousand others for Firefox, you will have to decide how important they are to the particular web app you want to create.
Prism isn't for everyone; it is intended for people who are looking for a certain kind of control over their web browsing experience. Hope this helps!