Gopal Venkatesan's Former Journal

My writings on free/open source software and other technologies


wmii: a small X11 window manager


In my constant search for a better X11 window manager, I stumbled upon WMII two weeks ago. I must admit it is by far one of the best I've ever used. I was previously a fan of Enlightenment DR17, but I still found something missing.


What makes WMII stand out from the rest?



  • It is a very small and lightweight window manager

  • It is a dynamic window manager

    • There is no concept of "static" workspaces

    • Windows can be stacked, tiled or displayed in fullscreen

    • Workspaces can be created on the fly when needed


  • It can be completely controlled using keyboard shortcuts

  • It exposes a virtual filesystem (9P) similar to the procfs of Unix and Unix-like operating systems


Installing WMII


Most GNU/Linux distributions, with the exception of Gentoo, don't package WMII. You can obtain the source package from its download page. Though still in beta, the 3.9 release is the recommended one.


Once the source is obtained, run "make install" to install the binaries (yes, there is no configure step).


shell> tar xjf wmii+ixp-3.9b1.tbz
shell> cd wmii+ixp-3.9b1
# Edit config.mk file to customize, at the minimum check the PREFIX variable
shell> make
shell> sudo make install

On Gentoo, the WMII package is masked, so it must be unmasked before it can be installed.
shell> sudo sh -c 'ACCEPT_KEYWORDS=~x86 emerge --pretend wmii'
# Inspect if things are fine, if yes proceed to install
shell> sudo sh -c 'ACCEPT_KEYWORDS=~x86 emerge wmii'

Configuring WMII


WMII is configured entirely through shell scripting. If you're just getting started, my advice is to simply copy the stock wmiirc to $HOME/.wmii-3.5. Depending on your install path, it may be located at /usr/local/etc/wmiirc, or at /etc/wmii-3.5/wmiirc on a default Gentoo install.
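For example, on a default Gentoo install (adjust the source path to match your system, as noted above):

shell> mkdir -p $HOME/.wmii-3.5
shell> cp /etc/wmii-3.5/wmiirc $HOME/.wmii-3.5/wmiirc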


To get started, note that the default accelerator/trigger key is bound to Mod4, which on most systems is the Windows key (Super_L). This can be changed by editing wmiirc and setting the MODKEY variable.


One of the most useless keys, if you ask me, is "Caps Lock". So on my computer I've changed MODKEY to use the "Caps Lock" key. If you're interested in doing the same, follow the steps below:


The secret of binding MODKEY to "Caps Lock" is to make "Caps Lock" the Mod3 key. For this you can use xmodmap(1). In your X11 client startup file (either $HOME/.xinitrc or $HOME/.xsession), invoke xmodmap(1) to make "Caps Lock" emit the Mod3 sequence.


Here's how you can invoke xmodmap(1) from your startup file:


# $HOME/.xsession
test -f $HOME/.Xmodmap && xmodmap $HOME/.Xmodmap
# ...

Here's how you can map Caps_Lock to Mod3:


# $HOME/.Xmodmap
remove lock = Caps_Lock
add Mod3 = Caps_Lock

With the above in place, set the MODKEY variable in your wmiirc appropriately to do the trick.


# $HOME/.wmii-3.5/wmiirc
# ...
# Configuration Variables
MODKEY=Mod3
# ...

Voila! You're done. To open an XTerm, hit Caps_Lock + Enter. Coming back to the virtual filesystem, from the XTerm you just opened, try wmiir read /client/sel/label. You should see the title of your terminal. For more information, I recommend reading the WMII documentation.
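For instance, here is a quick way to poke around the filesystem from a terminal (a sketch; the exact file layout can vary between wmii versions):

shell> wmiir ls /                      # list the top-level synthetic files
shell> wmiir read /client/sel/label    # title of the focused window
shell> echo view 2 | wmiir write /ctl  # switch to the tag/workspace named 2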


Setting up Oozie on your personal computer

This blog post is also about Map-Reduce, following up on setting up Hadoop on your personal computer. It describes the step-by-step process of setting up Oozie on a personal computer running OpenSUSE GNU/Linux. OpenSUSE is used only as an illustration; you can set up Oozie on any other distribution by adjusting the steps for installing the dependency packages.

Oozie is a workflow system that runs on Hadoop and is specialized to run Hadoop/Pig jobs.

Installing the pre-requisites

To set up Oozie (with HSQLDB), the following requirements need to be met.

  • Java 1.6+ (preferably Sun Java)
  • Hadoop (0.18.3, 0.19.1 or 0.20.0)
  • Pig 0.2.0
  • Tomcat 6.x (preferably 6.0.18)

If you have not set up Hadoop and Pig on your personal computer, follow these instructions to get them installed. A word of caution here: the Oozie distribution currently works only with the above-mentioned versions of Hadoop and with Pig 0.2.0. So while following my earlier post for installing Hadoop and Pig, install Pig 0.2.0 instead.

Tomcat can be installed by running the following command:

shell> sudo zypper in tomcat6

Installing Oozie

Grab the Oozie distribution that works with Hadoop 0.18 (0.18.3). Unpack the distribution under a directory, say $HOME/bin, and add $HOME/bin/oozie-0.18.3.o0.1-SNAPSHOT/bin to your PATH environment variable.

The following section assumes you're running a Bourne-compatible shell like GNU Bash.

shell> wget http://issues.apache.org/jira/secure/attachment/12409973/oozie-0.18.3.o0.1-SNAPSHOT-distro.tar.gz
shell> cd $HOME/bin
shell> tar xzf oozie-0.18.3.o0.1-SNAPSHOT-distro.tar.gz
shell> PATH=$PATH:$HOME/bin/oozie-0.18.3.o0.1-SNAPSHOT/bin; export PATH

If Tomcat is running, stop it. Copy the oozie.war file from the distribution root directory to Tomcat's webapps directory and start Tomcat.

shell> sudo /sbin/service tomcat6 stop
shell> sudo cp $HOME/bin/oozie-0.18.3.o0.1-SNAPSHOT/oozie.war /srv/tomcat6/webapps
shell> sudo /sbin/service tomcat6 start

Testing if everything went well

That's it; the installation is over. To do a quick sanity check, open your favorite web browser and hit http://localhost:8080/oozie. Oozie assumes that Tomcat is running on port 8080; if it is not, you will have to set the CATALINA_OPTS environment variable before starting Tomcat and use that URL for checking.

Assuming Tomcat is running on localhost:8888, you would set CATALINA_OPTS to:

shell> export CATALINA_OPTS="-Doozie.base.url=http://localhost:8888/oozie"

If the setup was successful, you should see the Oozie web console upon hitting http://localhost:8080/oozie.


Setting up Yahoo! Calendar with Evolution


I use (Ximian) Evolution for mail and calendar at work and at home. While my e-mails get synchronized easily, getting the calendar to work is a pain. Yes, Evolution doesn't work with Exchange Calendar yet. So the only way is to use another shared calendar for this. A year back, Yahoo! Calendar was completely revamped with a new user interface and came with CalDAV support as well. To try these features you need to use the Yahoo! Calendar beta by clicking on this link (I promise there's no XSS).

Configuring Evolution with Yahoo! Calendar


Coming back to the topic, I had been trying to configure Evolution to work with Yahoo! Calendar for quite a while without success. Today by no means seemed like my lucky day, but I was finally able to configure Evolution to work with the Yahoo! Calendar beta.
Here's how I did it:

  • Choose File > New > Calendar from Evolution menu bar.

  • Set Type to CalDAV (in case it's not automatically selected for you)

  • Choose any Name (I chose Public Calendar)

  • Choose some Color, and set as default calendar if you wish

  • In the URL field enter the following: caldav://caldav.calendar.yahoo.com/dav/yourusername@yahoo.com/Calendar/calendarname/

    • In case your e-mail domain is not "@yahoo.com" use the appropriate one

    • Don't forget the trailing slash (/) character



  • Select the Use SSL checkbox

  • Enter your username (yourusername@yahoo.com - yes with the domain name) in the Username field

  • Choose an appropriate Refresh period

  • Save the form by clicking on the OK button


and voila, it should work.
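As a concrete example with a hypothetical account: for the user jane_doe with a calendar named Work, the URL would be caldav://caldav.calendar.yahoo.com/dav/jane_doe@yahoo.com/Calendar/Work/ (note the trailing slash).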

Hadoop and Pig on your personal computer

Unless you're on a different planet, Map-Reduce is not a new term in your vocabulary. Hadoop is an open source Map-Reduce framework implemented in Java for processing large amounts of data in parallel. Although the framework is implemented in Java, Map-Reduce applications need not be written in Java.
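As a minimal sketch of that last point, here is a word count built from nothing but shell utilities via Hadoop's streaming contrib. The jar path follows the 0.18.x layout and the HDFS paths are made up for illustration; adjust both to your setup.

shell> ./bin/hadoop jar contrib/streaming/hadoop-0.18.3-streaming.jar \
           -input /tmp/input.txt -output /tmp/wordcount \
           -mapper 'tr " " "\n"' -reducer 'uniq -c'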

Setting up Hadoop on your personal computer

In this section, we will set up Hadoop on a GNU/Linux machine in pseudo-distributed mode. In such a setup, each Hadoop daemon runs in a separate Java process.

Requirements

At the minimum, you need Java 1.6.x (Sun is preferred), sshd, and rsync. If you're running OpenSUSE, you can install all of them by running the following command:

shell> sudo zypper in java-1_6_0-sun openssh rsync

Once you have the pre-requisites, you can download the Hadoop distribution from one of Apache's Download Mirrors. Since we're planning to use Pig with Hadoop, be sure to download one of the 0.18 versions of Hadoop only. This is because, at the time of this writing, the latest version (0.3.0) of Pig works only with Hadoop 0.18.x.

Installing Hadoop

Unpack the downloaded Hadoop distribution. If you have not installed Java under its standard path, edit /path/to/hadoop/conf/hadoop-env.sh and set JAVA_HOME to the root of your Java installation. Next edit /path/to/hadoop/conf/hadoop-site.xml and within the configuration section paste the following lines:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
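If you did have to set JAVA_HOME earlier, that change to conf/hadoop-env.sh is a one-liner; the JDK path below is an assumption for a Sun Java install on OpenSUSE, so adjust it to your installation:

# /path/to/hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1_6_0-sun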

Starting Hadoop

Check if you can ssh to your machine without a passphrase:

shell> ssh localhost

If not, run the following commands:

shell> ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
shell> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Format a new distributed file-system:

shell> ./bin/hadoop namenode -format

In case you get an exception like INFO util.MetricsUtil: Unable to obtain hostName java.net.UnknownHostException, check whether your hostname has an entry in the /etc/hosts file.

shell> grep `hostname` /etc/hosts
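If the grep comes back empty, an entry like the following (with yourhostname standing in for your machine's actual name) usually fixes it:

# /etc/hosts
127.0.0.1    localhost    yourhostname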

Now that you're done with configuring, you're ready to start the Hadoop daemons.

shell> ./bin/start-all.sh

If everything went well, you can check the list of files under the newly created HDFS.

shell> ./bin/hadoop fs -ls /

You'll see output similar to the one below:

Found 1 items
drwxr-xr-x   - yourunixusername supergroup          0 2009-09-05 22:10 /tmp

A Quick Test to find out if everything is fine

Open your favorite browser and hit http://localhost:50070/. If everything went fine, you should see that there is one live datanode; if not, there is a problem. If you have downgraded from a previous version of Hadoop (to be able to run Pig), then you may notice ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.dfs.IncorrectVersionException: Unexpected version of storage directory in the datanode log file. If so, you'll have to stop the Hadoop cluster, physically remove the Hadoop data directory (typically /tmp/hadoop-yourunixusername) and run the setup (namenode -format) again.

Pig

Pig is a platform for analyzing large data sets. Pig Latin is a SQL-like language that lets you specify a sequence of data transformations (split, join, filter) over large sets of data. The Pig engine compiles Pig Latin into Map-Reduce jobs that run on Hadoop.
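To make that concrete, here is a tiny (hypothetical) sequence of the kind you could type at the grunt prompt once Pig is set up; the input file, field names and types are made up for illustration:

grunt> logs = LOAD '/tmp/access.log' USING PigStorage(' ') AS (host:chararray, status:int);
grunt> errors = FILTER logs BY status == 404;
grunt> by_host = GROUP errors BY host;
grunt> counts = FOREACH by_host GENERATE group, COUNT(errors);
grunt> DUMP counts;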

That was a quick overview of Pig. In the following section, I'll show you how to set up Pig to run against the locally running Hadoop cluster.

Setting up Pig to run in Map-Reduce mode to use the local Hadoop cluster

Assuming Hadoop is already installed, we have satisfied the basic requirements to run Pig. Pig can be downloaded from one of Apache's Download Mirrors.

Unpack the downloaded distribution. The Pig launcher script is located under the bin directory. To use Pig with the installed Hadoop cluster, you need to set the PIG_CLASSPATH variable to include Hadoop's conf directory and set PIG_HADOOP_VERSION to the appropriate Hadoop version.

For example, on my system, Hadoop is unpacked (installed) under $HOME/bin/hadoop-0.18.3, and the installed version is 0.18.3. So I have the following launcher script for launching Pig.

#!/bin/sh
# Wrapper to launch Pig against the local Hadoop 0.18.3 install
PIG_PATH=$HOME/bin/pig-0.3.0
PIG_CLASSPATH=$PIG_PATH/pig-0.3.0-core.jar:$HOME/bin/hadoop-0.18.3/conf \
PIG_HADOOP_VERSION=0.18.3 \
$PIG_PATH/bin/pig "$@"

Running pig should print something like:

2009-09-05 23:17:41,113 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000
2009-09-05 23:17:41,428 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: localhost:9001
grunt>

To test, you can run the ls command at the grunt prompt to check if things are fine.

shell> pig
2009-09-05 23:22:13,845 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000
2009-09-05 23:22:14,197 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: localhost:9001
grunt> ls /
hdfs://localhost:9000/tmp	<dir>
grunt>

This is my first post on this topic; going forward I'm planning to post more articles. Until then, keep Grunting!


Emacs and GNOME

If you are an Emacs user running GNOME as your desktop environment, one of the annoyances that you might have experienced is that they don't play well together.

Quite frequently, switching from Emacs to another application (Alt+Tab), or switching desktops while Emacs has focus, would simply make Metacity unresponsive. One workaround I had used was to switch to a different virtual console (Ctrl+Alt+F8) and back. This is quite annoying.

When searching the Internet, I came across this bug with Accessibility (Assistive Technologies) on GNOME Bugzilla. The real issue is that when Assistive Technologies is "on", it interferes with the way Emacs interacts with X, and that causes the problem.

The workaround is to turn Assistive Technologies "off" until this bug is fixed. If you're running OpenSUSE Desktop, you can do that by unchecking "Enable assistive technologies" under "Assistive Technologies" in the "Personal" group of the "Control Center". Of course, for the change to take effect, you need to log out and log in again.
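If you prefer the command line, the same flag can be flipped with GConf; I believe the key below is the one backing the "Enable assistive technologies" checkbox on GNOME 2:

shell> gconftool-2 --type bool --set /desktop/gnome/interface/accessibility false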

Chrome^Hium on GNU/Linux

In one of my earlier blog posts, I had mentioned why I'm moving away from Gecko and towards Epiphany/WebKit for my browsing. But the most annoying thing, which kept pushing me to some other web browser, was its inability to work with secure web sites (https). So I was looking around for options, until it struck me that I could use Google Chrome, since it was being ported to both the Mac and GNU/Linux. For people who don't know, the underlying rendering engine behind Chrome is WebKit.

Getting Google Chrome for GNU/Linux

The easiest option for getting Google Chrome for GNU/Linux is to grab the binary from one of the pre-built daily snapshots. Unfortunately, on many distributions (including the RedHat and SuSE ones), the binary doesn't work out of the box because it links against differently named libraries. So, to get Google Chrome working on OpenSUSE (version 11.1), you need to install the following packages as dependencies and link the libraries to the names that Google Chrome expects.

[Screenshot: Chromium on GNU/Linux]

Packages needed on OpenSUSE

  • mozilla-nspr (provides libnspr4.so, libplds4.so, libplc4.so)
  • mozilla-nss (provides libnss3.so, libnssutil3.so, libsmime3.so, libssl3.so)

Installing is as simple as running sudo zypper install mozilla-nspr mozilla-nss.

What are the missing libraries/linkages?

Running ldd /path/to/downloaded/chrome | grep 'not found' returned the following output.

	libnss3.so.1d => not found
	libnssutil3.so.1d => not found
	libsmime3.so.1d => not found
	libssl3.so.1d => not found
	libplds4.so.0d => not found
	libplc4.so.0d => not found
	libnspr4.so.0d => not found

Running zypper what-provides libnss3.so.1d (or any of the other missing libraries) returns nothing!

The Dirty Trick

When I bump into such situations, the simplest thing I try is to run a shell procedure I wrote ages ago, which just tries to locate a similar library (maybe a newer or older version) and symbolically link it as the "expected" library. In all probability, if there isn't a binary incompatibility, it works like a charm. So I tried it, and voila! It worked!

All I had to do was just run find-and-fix-missing-libs /path/to/downloaded/chrome.

The Shell Procedure that I'm using

The shell script is about 50 lines of code and can be downloaded from my samples directory on GitHub.
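For the curious, the gist of it is something like the following minimal sketch; this is not the actual script, and the function name and library directory are illustrative:

#!/bin/sh
# For each library that ldd reports as "not found", locate a similarly
# named one under /usr/lib and symlink it under the expected name.
find_and_fix_missing_libs() {
    binary=$1
    ldd "$binary" | awk '/not found/ { print $1 }' | while read lib; do
        base=${lib%%.so.*}.so    # e.g. libnss3.so.1d -> libnss3.so
        candidate=$(ls /usr/lib/"$base"* 2>/dev/null | head -n 1)
        if [ -n "$candidate" ]; then
            echo "linking $candidate -> /usr/lib/$lib"
            sudo ln -s "$candidate" "/usr/lib/$lib"
        fi
    done
}

find_and_fix_missing_libs "$1"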

How is Chromium?

As expected, the browser still has a lot of rough edges and is probably not suited for daily browsing (nevertheless, I use it). But it is way better than Epiphany in terms of stability and protocol support, and it has one of my favorite options (being paranoid): private browsing :)


Became a Gum-ball Yahoo


Yesterday was one of the most memorable days of my life.


I couldn't be more surprised; time flies. It was 5 years ago that I joined one of the companies I had been dreaming of stepping into. Yes, there were happy moments, sad moments, surprises, and shocks, but nevertheless it is still one of my favorite companies, and it will remain so in the coming years!


I would like to thank everyone who has helped me travel this far in my life. And yes, thank you, fellow Yahoos.


A simple browser-based presentation system


I do a lot of training and give tech talks, both within the company and outside it. Most of the time I created presentations using OpenOffice.org (at home) or PowerPoint when I had the MacBook Pro work laptop. Even though I used one of these tools, my presentation files would be exported and distributed as PDF.


Two years ago, I stumbled upon browser-based presentation tools like S5 and Slidy. I realized the importance of such tools since they conveyed a very important point: "the browser is the platform". If the presentation (format) is just plain HTML, then:



  • Creating presentations is a breeze

  • Distribution is also quite simple: it can even be hosted on a web server and viewed from any computer with a web browser


Not that S5 is bad, but I liked Slidy very much. I even started to convert my existing presentations to the Slidy format, and I was so impressed with the tool that I even sent a patch to the developer.


Then I started noticing more problems with Slidy, and I also wasn't quite impressed by the way the code is organized. So I thought, why not create a presentation system that follows the same philosophy as S5 and Slidy, yet is much better than these tools?


An idea struck me that I could quickly whip up a simple presentation system using YUI Carousel. I started putting the pieces together, and to my surprise a simple presentation system with animated slide transitions was written well within 20 minutes! I couldn't contain my joy; it was an awesome experience.


If you're interested in downloading the template, you can grab it from GitHub.


Blogged with the Flock Browser

Epiphany with WebKit

Goodbye Gecko, Hi WebKit

I have been a fan of Gecko and had used it since the Mozilla M14 milestone release. My initial experience with Mozilla was simply amazing, because I was annoyed with Microsoft Internet Explorer, and Netscape Communicator was much worse, being very slow at rendering pages. Then came Firefox. I was one of the early users, having used it when it was still known as Firebird. The 1.0 release was much anticipated, and it lived up to expectations. When Firefox brought tabbed browsing to the masses, it was one of the most hyped features because most browsers didn't have it, even though Opera users had been enjoying it for long ;)

Slowly, I started noticing problems with the 2.0 release of Firefox: it had horrible memory leaks, and hence would make my computer crawl at times, chugging memory like mad! Then came the Firefox 3.0 release, which claimed better memory management and many other (developer-friendly) features. While I did notice that memory consumption was lower, it wasn't good either.

In the meantime, I was trying out a lot of other browsers on different operating systems. At work I use Leopard, and at home I use Debian GNU/Linux. On the Mac, I was happy with Safari; I noticed that it was faster at rendering and at the same time consumed less memory. That's because recent versions of WebKit, the rendering engine behind both Safari and Google Chrome, were getting better and better in terms of speed, simplicity and support for web standards. At home I was using Epiphany for web browsing, since I didn't have separate profiles for work and non-work browsing in Firefox. The version of Epiphany bundled with GNOME on Debian GNU/Linux uses Gecko as its rendering engine, but it was nevertheless better than Firefox in terms of memory usage. While this can be attributed to the extensions I have in Firefox, I was using quite a few extensions with Epiphany too. So I gradually became inclined towards WebKit and started to avoid Gecko for anything but testing. The next version of Epiphany will be using WebKit, according to this announcement. While the announcement itself is quite old, since I was happy with WebKit, it got me excited. So I started compiling the latest version of Epiphany against WebKit, because there weren't any beta releases of the upcoming version.

Compiling Epiphany with WebKit

Here we go. If you wish to compile the latest Epiphany (2.27.1 as of this writing), you begin the process by satisfying the following dependencies:

If you're like me, having a stable (but older) version of all these packages bundled with your operating system, you can set up these libraries under a different directory. Usually /usr/local or /opt is chosen, but I chose /usr/sfw. If you wonder why: I like the Solaris file system layout very much, which is why I chose this directory.

The dependencies should be compiled in the order specified above. The steps are identical for the first three libraries, so for each of them, perform the following:

# Download the source, and extract it (using whatever archive expander your operating system provides), then for each library, do the following steps
shell> sh configure --prefix=/usr/sfw
# Assuming the software is configured to be built
shell> make
# If everything is a success ...
shell> sudo make install

Compiling WebKit is much the same, except that by default it builds HTML5 video support, for which you need the GStreamer plugins. I didn't care for this support, since WebKit can play videos even without it. So the following steps build WebKit into the same /usr/sfw directory.

# Download and extract the nightly WebKit source tarball
shell> sh configure --prefix=/usr/sfw --disable-video
# Assuming the software is configured to be built
shell> make
# If everything is a success ...
shell> sudo make install

Finally, all the dependencies are satisfied for Epiphany itself to be built against WebKit. One word of caution: this is a very unstable version of Epiphany, so we will not replace the original Epiphany with it; instead we will build and install it into a separate directory. Since the dependencies are installed in a non-standard directory, it is necessary to add that directory to the include and library search paths. So, let's start by configuring the build environment for building Epiphany against WebKit.

# Make sure that /usr/sfw is given precedence in include and library paths
# Since Debian GNU/Linux installs the dbus include files into a non standard directory, we'll have to fix it
shell> CPPFLAGS="-I/usr/sfw/include -I/usr/include -I/usr/include/dbus-1.0"
shell> LDFLAGS="-L/usr/sfw/lib -L/usr/lib"
shell> LD_LIBRARY_PATH=/usr/sfw/lib:/usr/lib
shell> PATH=/usr/sfw/bin:$PATH
shell> PKG_CONFIG_PATH=/usr/sfw/lib/pkgconfig:/usr/lib/pkgconfig
shell> export CPPFLAGS LDFLAGS LD_LIBRARY_PATH PATH PKG_CONFIG_PATH

The build environment is ready for Epiphany to be built against WebKit now. I chose to install Epiphany into /usr/sfw/epiphany. The source for the latest version of Epiphany can be downloaded from here.

# Download and extract the source archive
shell> sh configure --prefix=/usr/sfw/epiphany
# Assuming the software is configured to be built
shell> make
# If everything is a success ...
shell> sudo make install

The newly built version of Epiphany can be launched by running /usr/sfw/epiphany/bin/epiphany.

Final Words

[Screenshot: Epiphany with WebKit passing the Acid3 test]

Things seem nearly perfect. The newly built browser scored 100% on the Acid3 test, unlike Firefox 3.0, which scored only 71%. Nevertheless, it has some glaring bugs that are really annoying at times. Here are some that might affect your daily web browsing:

  • FTP URLs don't seem to work, at least for me.
  • View source doesn't work.
  • Sometimes with page scrolling you might notice video/image content getting clipped.
  • Extensions don't seem to work as of yet.

But I'm using Epiphany with WebKit for my daily browsing, and I'm otherwise happy with it :)


Hello, world

After several years, I'm finally on LiveJournal. It's not a decision I've taken hastily. I've tried many options, from different blogging solutions like Wordpress, to maintaining and running my own weblog (again on Wordpress) on shared hosting, and more recently on my own server (a VDS). I have been doing this for years, but enough is enough; what I wanted is a simple system where I can voice my opinions, share my knowledge, write about my travels, and many other things.

Why shared hosting didn't excite me

Running your applications and maintaining a weblog on shared hosting is a pain, because you're stuck with the restrictions your web hosting provider puts on you: running PHP as a CGI, not being able to install your favorite extensions, using ".htaccess" to configure Apache only to some extent, and so forth. It's a pain you'd have to live with. I hated this; I wanted to configure my web server, PHP and the other libraries myself, so I opted for a VDS.

But for all the kicks, the VDS didn't last long either

After carefully investigating several options, I chose miniVDS. If you haven't experienced a VDS, it's a completely different kind of experience. There are several options provided by miniVDS, one of which is a barebones VDS, where you configure the operating system, web server, mail server and so forth entirely yourself. I liked this very much, because I had the luxury of configuring and securing my web server the way I wanted. I even had the option of not compiling unwanted modules/extensions into Apache or PHP. I also started writing my own weblog software, since some of the popular blogging packages are either bloated or full of holes. I had designed my weblog software to be darn simple: each post was an RSS feed, and I used Lucene to index the posts and serve them. But this too didn't last long; I didn't have the luxury of time to do all these things forever. If you have ever run Gentoo (or even LinuxFromScratch), you know what I'm talking about.

All is well, now :)

So, I was back to square one. Looking for a good blogging provider, I broke my head over many options including Typepad and Blogger, but finally I settled on LiveJournal. One, because it's open source, and two, because I know that all of my friends (yathin, vijayr, teemus, anomalizer, suhas, bluesmoon, code_martial, hitesh, ...) have been here for years. I'm happy now :)

