Saturday, December 13, 2008

UAW and the first quid pro quo for the Obama Administration

Let's call this bailout what it really is and stop insulting the electorate's intelligence. This is the first payment to the most powerful of Obama's supporters. The UAW had the unmitigated gall to send Obama a bill for services rendered. So now we all get to sink billions into car companies that have not been competitive since the 1960s. Kleetus' message to the Big Three: You lost! You were beaten! Have some dignity and go do something else! But nay, what are the car companies and their allies doing? If we can't beat Toyota and Honda by making cars people actually want to buy, we will just finance politicians' campaigns and send them a bill afterwards to get free money. We can spin this thing by saying, "Wouldn't it be terrible if the millions that we employ lost their jobs?" Guess what? After you receive the bailout money, and you will, you will STILL go out of business, you dopes! If you couldn't build cars that people wanted to buy in a good economy, what makes you think that you can suddenly be competitive with other car makers in this economy? Who is going to buy your cars now that you are on the brink of going under? Do you think that we will keep printing money for you all indefinitely? I would think not.
Let's just cut to the chase on this bailout and call it what it is: taking money from hard-working people in places that are not Michigan and sending it to Michigan and places under the thumb of the UAW. This is government highway robbery. This is the cost of the electorate not paying attention. The collective cost of not knowing sh** about your government is that "Tony Soprano" types in the unions and inept CEOs in Detroit get YOUR money. I think when Obama said "Change you can believe in", he was speaking DIRECTLY TO DETROIT! Translated into plainer terms: "Believe me guys, I will be sending you change, bags and bags of it". These guys are thieves and they have Obama in their pockets. Really, they have the House and Senate leadership too, since Obama has little power on his own and will only be a puppet for the unions and trial lawyers. Maybe we ought to petition these groups for the next 4 years?

What is with that title?

Logic programming is the culmination of mathematics and its study of natural patterns. Prolog, a logic programming language, is the language in which to express these patterns and process them on the most powerful mechanical computing machine possible: the Turing machine. Rather, a physical machine that closely resembles Alan Turing's machine. The title of this blog represents a rule, a statement evaluating to true, that can be processed unambiguously by a machine of my choosing. The things written here aim to be unambiguous and truthful.

Wednesday, October 8, 2008

Cell Annoyances: Simple, Clear, Direct, and Concise

Found this article, "Peering Inside a Mobile Phone Network," written by Rich Mogull. It was very well written and cleared up some things that I did not know about cell networks. I had the grave misconception that cell networks are like IP networks; oh no sir. The truth is that they are much different. The article sort of takes the view that we (users of these networks) should think about cutting the operators and designers of these networks some slack. I say no way on that. It is not our fault that the cell networks are woefully inadequate to the task. Alas, the article describes (from a very high level) what is taking place, and it is worth a read even if you are very technical already.

Thursday, September 18, 2008

Python Code:: Sort by Key and Subkey in a List of Dictionaries

Here is a semi-clear way to perform this function. There is probably a more elegant way to do it, or even some snazzy library method, but this shows some of the guts of how it might be done. I am confident this can be rewritten more elegantly (though, being a comparison sort, O(n log n) is the floor, so forget linear time). Again, this is quick and dirty, the "Ah crap, I need this now and I can optimize later" type of thing:

from operator import itemgetter

def sortResults(self, unsort_rs, orderby, desc, subsortkey=None):
    # Sort a list of dictionaries by orderby, then by subsortkey
    # within each group of rows sharing the same orderby value.
    list_rs = list(unsort_rs)
    result_rs = sorted(list_rs, key=itemgetter(orderby))
    if subsortkey:
        sub_list, complete_list = [], []
        for i in xrange(len(result_rs)):
            sub_list.append(result_rs[i])
            try:
                if result_rs[i + 1][orderby] != result_rs[i][orderby]:
                    complete_list += sorted(sub_list, key=itemgetter(subsortkey))
                    sub_list = []
            except IndexError:
                # Last row: flush the final group.
                complete_list += sorted(sub_list, key=itemgetter(subsortkey))
        result_rs = complete_list
    if desc == 1:
        result_rs.reverse()
    return tuple(result_rs)
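For the record, the whole grouping dance can also be collapsed into a single sorted() call by handing itemgetter both keys, which makes it return a tuple key. This is a sketch with made-up sample data; note that desc here reverses both keys at once, which may or may not be what you want:

```python
from operator import itemgetter

def sort_results(rows, orderby, desc, subsortkey=None):
    # itemgetter with two arguments returns (row[orderby], row[subsortkey]),
    # so one sorted() call handles the primary and secondary keys together.
    keys = [orderby] + ([subsortkey] if subsortkey is not None else [])
    return tuple(sorted(rows, key=itemgetter(*keys), reverse=(desc == 1)))

rows = [{'a': 2, 'b': 1}, {'a': 1, 'b': 2}, {'a': 1, 'b': 1}]
print(sort_results(rows, 'a', 0, 'b'))
```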

Wednesday, August 20, 2008

Ipsecuritas, must allow ICMP, MAC OS X VPN

So we had this box nonstop pinging a server. The IP was private, so it was easy to tell it came from the VPN zone of the firewall, but I could not tell why someone was doing this. So I filtered it out and waited for the calls to come in. Well, the call came in about someone getting timeouts from this server when pulling web traffic across the VPN. Surely web traffic has nothing to do with the ICMP that I filtered. Well, I would be WRONG. It turns out that the IPSecuritas VPN client uses a nonstop ping (once every 3 seconds) to a LAN host that it previously had traffic to in order to keep its tunnel open. Otherwise, IPSecuritas (client side) will tear the tunnel down. Game over for the VPN connection. Bunk! Shame on you, IPSecuritas: are you so ghetto that you need to do this? It just seems so bush league. From a sysadmin point of view, you can't think of a better way to do this? How about a proper keepalive packet to the firewall?

Thursday, August 14, 2008

Sharing stuff

Ok, how can I be of service to someone else? How can I enrich someone else's life? How can I ease the suffering of others? These are the questions that I ask in my life. In pursuit of this, I think about experiences that I have had. For some reason, I think it noble to impart any scraps of wisdom that I have picked up along the way. Maybe I can impart wisdom in the fashion of, "hey, I did this and it hurt, so be smart and don't do it, at least not like I did".

With all this in mind, I am going to start a series called "Life on a nuclear submarine," so that people can get a sense of the tribulations of being a crew member. This was a time when I was constantly undergoing strife. Maybe lessons can be taught about not being a sailor, or maybe there are just lessons about being in close proximity to 100 other men.

Wednesday, August 13, 2008

Bikes for Christmas

So I started this charity project called "". I totally forgot to explain what that was on this forum. Maybe some marketing value will come my way. Here is the thing: it is just me giving bikes away for Christmas. That's it. You got a youngster that needs a new bike? Just sign up and I will provide.

Get a bike!

Sunday, August 3, 2008

Bacula: Understanding its Pool Resources

I back up about 20 different data sets daily. Each set lives on a separate machine somewhere on our network. I struggled a bit to get Bacula to do what I wanted it to do. Maybe I was looking at the problem wrong, but I felt I should formulate my own backup strategy and then work Bacula's configuration into the tactics. Instead, it seems you are expected to strategize around Bacula's common usage to make things easier on yourself. Bunk! Bacula does the job, but you need to understand the configuration settings well to get Bacula to do what you really want.

First, the pool resource section of the bacula director configuration is a good place to start. This is the stuff I monkey with the most.

Here is my strategy (devised without thinking about how Bacula does things). I want the following:

1. On the first Monday of the month, a FULL backup of all data sets.
2. On the third Monday, a differential backup everywhere.
3. Every night other than those two, incrementals.
4. A pool for each machine (each data set).
5. A maximum of 2 volumes written for each data set.
6. A data set will hold a month of backups (FULL backup, incrementals, differential, incrementals).
7. Once 2 volumes are written out in the above fashion, the oldest volume gets recycled. This gives us 2 full months of backups at best and 1 full month at worst, depending on where we are in the backup cycle.

To make this happen, you must do the following:

1. Set your "Maximum Volumes" to 2 for all the pools (in the Pool resource section of bacula-dir.conf).
2. Set your "Volume Use Duration" to 1 month.
3. Set your "Recycle Oldest Volume" to yes.
4. Set your "Recycle" to yes.
5. Set your "Purge Oldest Volume" to no.
6. Set your "AutoPrune" to yes.
7. I recommend letting Bacula auto-name your volumes. The pool already has a descriptive name, so who cares what the volume is named? So set "Label Format" to "vol".
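Putting the settings above in one place, here is a sketch of what one Pool resource in bacula-dir.conf might look like. The pool name is made up, and directive spellings should be checked against your Bacula version's documentation:

```
Pool {
  Name = warehouse1-pool
  Pool Type = Backup
  Maximum Volumes = 2
  Volume Use Duration = 1 month
  Recycle Oldest Volume = yes
  Recycle = yes
  Purge Oldest Volume = no
  AutoPrune = yes
  Label Format = "vol"
}
```

You would repeat a resource like this for each machine's data set, per item 4 of the strategy.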

Then g'head and fire off your backups for a month. Once your pools each have 2 volumes written in them (in this case, about 32 days after starting your backups), come back to the config and change one setting. In bacula-dir.conf, change "Purge Oldest Volume" to yes by doing the following:

1. Edit your bacula-dir.conf file for each pool and restart the Bacula director.
2. Update each of the already-written volumes to honor this by issuing "update volume" at the bconsole and changing this parameter for each volume.

Now you have a self-rotating backup schedule that will maintain itself, more or less.

Thursday, July 31, 2008

VirtualBox PXE Boot or VirtualBox PXE / TFTP

Just started using Sun's VirtualBox. VMware and Parallels ought to be shaking in their closed-source boots. It was easier to install and configure than Parallels or VMware, and it is FREE (as in beer). Sun even released an open source version. Parallels is such a bane running on Mac OS X, but VirtualBox seems to run smoother. I was even able to PXE boot a guest OS without using a floppy image, like I have to do with my older version of Parallels.

A couple of things make VirtualBox different from other hypervisors. First, at least in the Mac version of VirtualBox (VB), the networking differs in the following ways. (This is in the manual too.)

1. The network between the host OS (the OS that is native to the machine and that VB is installed on) and the guest OS (the OS that you are installing inside VB) is implemented in user mode. What this means is that VB does not reach into the host OS kernel and try to glom onto the network stack. Instead, VB just takes network traffic from its guest OSes and sends it the same way any other app on the host does. VB does this using NAT by default, so VB only needs to present one logical "port" to the host OS for the network. You can think of the VB hypervisor as a router, just like the one you may use at home to put multiple computers behind a cable/DSL modem: the hypervisor has the IP of the host OS, and it masquerades all of its guest OSes behind that IP. Pretty easy, makes sense, but definitely less convenient.

2. You can "bridge" the network connections to your guest OSes too. This is more common in paravirtualization and virtualization. There are directions to do this in the manual, but there appears to be no way to do it with VB on a Mac OS X host. Oh well, maybe in later versions. You can still port forward while you use NAT. I do not need this feature at this time anyway.

3. I installed VB on a Mac OS X host and then PXE booted the Debian etch installer, and noted some things that can save you troubleshooting time. First, you must make a TFTP directory and copy your Debian installer files and the pxelinux.0 image there. Also, and this tripped me up good, you CANNOT PXE boot an image that has a space in its name. For example, I named my VB image "warehouse image". So when pxelinux tried to go and find its configuration (by default: pxelinux.cfg/), VB passed it a directory prefix of "warehouse" WITHOUT the "image" part. Bad news. I renamed the image to just "warehouse" and it worked. So here is the list of what to do to make VB PXE boot.

a. Create an image in VB (be sure not to use whitespace in the image name; use foo and NOT foo bar).
b. Configure the boot options to boot from the network.
c. Create a new directory called TFTP in ~/Library/VirtualBox/.
d. On your Mac, rename "pxelinux.0" to foo.pxe and copy it to ~/Library/VirtualBox/TFTP/.
e. Copy the entire pxelinux.cfg and debian-installer directories to ~/Library/VirtualBox/TFTP/.
f. The PXE system can be obtained from Debian in a netboot file called netboot.tar.gz (this contains all the files you need to PXE boot into a Debian installer).

A couple of things I noted AFTER Debian was installed and I rebooted into it. Since the VM was still set to network boot, it tried to network boot again. This time it got an IP from the internal VB DHCP server (yes, the hypervisor runs an internal DHCP server), found the TFTP (next-server), and then attempted to find a configuration inside pxelinux.cfg but COULD NOT because "TFTP server does not support the tsize option". OK, so I let it sit, and about 10 minutes later it booted anyway! I have seen this tsize error before; it is well documented in the pxelinux FAQs. They recommend using a TFTP server that supports the tsize option, such as tftpd-hpa, but why the VB TFTP server worked before the Debian install and not after is a mystery. Installing an OS should not have anything to do with the TFTP server.

Tuesday, June 24, 2008

Ruby -- How to turn a CSV file into a list of hashes

There are convenience libraries for this type of thing, but I thought I would plunk down my solution.
Say you have a csv string and you need to just turn it into a proper Ruby data structure. A list of hashes would be nice since this is the same type of data structure you typically work with when you query a DBMS. Here is a function to do this.

def turn_csv_into_list_of_hashes(string)
  returned_list = []
  # the first line should be the header
  rows = string.split("\n")
  header = rows.shift.split(',')
  rows.each do |row|
    row_hash = {}
    row.split(",").each_with_index { |item, i| row_hash[header[i]] = item }
    returned_list << row_hash
  end
  return returned_list
end

# turn_csv_into_list_of_hashes("a,b\n1,2\n3,4")
#   => [{"a"=>"1", "b"=>"2"}, {"a"=>"3", "b"=>"4"}]

Thursday, June 12, 2008

Erlang movie and other cultural turning points

Not sure if you've seen it. The Erlang movie shatters all notions of what human beings can accomplish with a bit of film, some astounding acting, and a script that descended right out of heaven. There are two time periods in human evolution, pre and post Erlang movie.

Tuesday, June 3, 2008

How environment variables really work (in POSIX systems)

Environment variables are strange animals, straddling the system (not kernel) world and the application world. I will use an analogy from the movie "The Matrix". If you have not seen "The Matrix", stop reading this and run to a video procurement establishment and get it. In the Matrix, Morpheus shows Neo "the construct". This is a blank space or "environment" from which to load anything they need, from huge racks of guns to grenades to cool leather jackets and sunglasses.
From that environment, they could have all the tools they needed to take on the agents. The supplies would always be there, no matter where they went inside the matrix. This is an excellent example of what environment variables really are. They are containers or "racks" that hold stuff or "information" that actors (like the real actors) can use while inside the environment (the matrix).
In POSIX systems like Linux, there are some rules that we must know about to know where and when environment variables are loaded.
Environment variables cannot just hang out without "being hosted" by a process. In Linux, there is ALWAYS a hierarchy of processes (applications, if you like). The Linux kernel is the mac daddy "process", but it is not a process per se because it runs entirely in a space that humans cannot access. I bring up the kernel as a process because it really does run on the computer, and it launches the first real usable process, called init. Init is always process number 1 and ends up spawning all the other processes in the system, such as X11, sshd, and everything else that runs in userspace. If you are knowledgeable in Linux topics, you may have noticed some holes in the above, but for beginners this explanation is a decent primer.
Generally, applications in the Linux world know about the concept of environment variables and can make use of them, but the init process is different. It does not know about environment variables (because essentially there is no environment when it starts, except from the kernel). On the other hand, init does know about arguments passed in when the kernel is booted. If you launch the kernel as: vmlinuz foo=bar, the kernel will boot, examine the key foo, discover that foo means nothing to the kernel, and hand foo=bar to the init process.
So init is special because it has no traditional environment. When init "spawns" its processes as dictated by init scripts or configuration in inittab, those processes will launch what is known as an interactive or non-interactive shell. This is where, I think, people get confused.
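Before diving into shells, here is a quick Python sketch of the one mechanic underlying all of this: each process's environment is copied into the children it spawns. (The variable name DEMO_VAR is made up for the example.)

```python
import os
import subprocess
import sys

# Set a variable in this (parent) process's environment.
os.environ['DEMO_VAR'] = 'from-parent'

# Spawn a child process; it receives a copy of the parent's environment,
# just like init's children and their children do.
out = subprocess.check_output(
    [sys.executable, '-c', "import os; print(os.environ['DEMO_VAR'])"])
print(out.decode().strip())  # from-parent
```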

Now, the information below is Bash-centric. Bash is the primary shell these days for Linux/Unix-type systems. Lots and lots of people use other shells too, but if you are reading this, you are probably not interested in shells other than Bash. The concepts for the other shells are surprisingly similar; they just read slightly different files.

There are only two types of shells that you can have in a Unix-type system: interactive and non-interactive. The difference between interactive and non-interactive shells is that interactive shells involve a human or a process that needs the same things as a human would need when working inside a bash shell, for example. In other words, interactive shells require that something interact with the shell directly and not simply fork off and do its own thing.

Interactive shells:

To further confuse the issue, there are two types of interactive shells, login and non-login. Now pay attention, this is the good part. The files that interactive login shells read and interactive non-login shells read are DIFFERENT. This is why you should care to read this section. Let's look at the files that login shells read:


1. /etc/profile
2. ~/.bash_profile
3. ~/.bash_login
4. ~/.profile

Bash reads /etc/profile first; of the next three personal files, it reads only the first one that exists. Other files can be read, but they will be referenced in the above files. Now long timers might exclaim:
"But what about if you run bash with the command 'sh' and use --norc option."
To this I say fooey. We aren't remaking the bash man page here.

For non-login shells (we are still interactive here) these files are read:

1. /etc/bash.bashrc (on distributions that compile Bash to read it)
2. ~/.bashrc


You might be asking when an interactive non-login shell might be used. Well, that is an excellent question. Non-login shells are used when you are already logged into a shell (or into X) and you need another interactive shell from which to launch scripts or issue commands. The system assumes that you do not need to reload /etc/profile or the rest of the login files because they were loaded when you logged in! Now you might ask: what if I changed something in /etc/profile, and now I need the new variable and it is not there? This is because the shell you launched was an interactive *non-login* shell. So now you are thinking, "Christ, why so complicated?" No answer for that one, but you are preaching to the choir, brother/sister. There are two things you can do about this.
1. The application you are starting may have an option that starts a login shell; xterm does, if you launch it as: xterm -ls
2. Just throw your environment variables in an rc file such as ~/.bashrc and call it a day.
For number 2, I am sure shell purists are just ready to shoot me, but you know what? The whole thing is overly complex and kind of silly. My theory is: understand how the system works, then make it work the way you think it should, and tell people why your way is better. If your way is not better, then people will fill you in as to why.

Non-interactive shells

A non-interactive shell is for system "users", such as a web server, mail server, or cron daemon. We like to put these users of the system into a shell environment where they have just as much access to the system as they need to do their jobs and no more. They do not get to read /etc/profile because:
1. There is stuff in there about a human's environment (where the games are, possibly) that is of no consequence to them.
2. We do not want them knowing too much. If a cracker were to compromise one of those accounts, we do not want too much info sitting there in that construct.
When you think of non-interactive shells, think of launching a shell script or running a Perl/Ruby/Python script from the shell. Also think of your rc scripts that run when the system is booted. Those scripts still need an environment to work from, but should not be given the same environment as your Bash prompt. You can also have non-interactive login and non-interactive non-login shells. Wow, this is confusing. The difference in the non-interactive context is that the login version simply reads the same files as an interactive login shell (/etc/profile, then the first of ~/.bash_profile, ~/.bash_login, ~/.profile), while the non-login version looks for a $BASH_ENV environment variable and attempts to source that file. The $BASH_ENV value must be a full path, because $PATH is not used to search for the file.

The bottom line for this post is that environment variables can be absolutely maddening. If you understand the kinds of shells you can have (interactive or non-interactive, login or non-login), then that goes a long way toward figuring out where your variables will be loaded from and when. To help yourself keep all of this straight, I recommend one of two things.

1. make a cheat sheet for yourself.
2. memorize this info (or at least some of it) by taking an hour and experimenting with your shell.

The way shells and environments were laid out is very difficult to keep straight, but if you do not have the gumption to redesign it all, then I hope this post helps.

Sunday, June 1, 2008

Rails Conf -- Impressions

Really smart people, but unfortunately most were reinventing the wheel. Some knew they were recreating things already available but didn't care; others toiled needlessly. Obie's talk was clearly the best: he pointed out that using your abstractions properly is something to be valued.

Ruby VMs are not interesting. At least not to me. If you want a VM like the JVM, just use the JVM; you will be much happier in the end and maybe get to enjoy life more. Creating a faster, multi-threaded VM is a good learning experience, but does not mean much, even in the short term.
Ruby does no more for software safety than any other imperative language. Although Ruby "makes programmers happy", that does not amount to a hill of beans in improving our customers' lives. If happy programmer == well-tested code that meets the specs, then great. But as Obie Fernandez points out, this is frequently not the case. Living the 80/20 rule through a world full of broken code stinks. I really like some of the research going on to let Ruby build applications that are more concurrent and fault tolerant while still being, well, Ruby. I hope some of these things make it into Rails Conf next year.

Friday, May 30, 2008

Rails Conf -- Joel Spolsky

I really liked Joel's keynote at Rails Conf 2008. I have always read Joel on Software and thought he was a Kool-Aid-drinking Microsoft guy, but he did not come off as such. He was thoughtful and practiced. More later.

Saturday, May 17, 2008

Apple "regedit"/ changing default settings

There was not much on google for this. I recently clicked "Use Settings as Defaults" in Terminal's "Window Settings" on Mac OS X 10.4.11. This screwed me over because I was secure-shelled into a remote machine at the time. What this did was save the last execution string in the Mac OS X defaults settings. So then, every time I launched Terminal, it would ssh into that last server! Bunk.

Well, I did not know about the whole "defaults" command in Mac OS X; I was looking for some config file in my home directory, to no avail. Terminal is a system binary that squirrels away its settings in the internal OS defaults database. To fix this issue, you must access this internal database using the "defaults" command. Here is the exact syntax:

defaults read com.apple.Terminal | less    (in less: hit '/' and search for ExecutionString)
defaults write com.apple.Terminal ExecutionString ""

Now we have knocked this problem out. The defaults command will also let you alter all kinds of other settings in your OS.

Wednesday, April 23, 2008

How to take a bootable CDROM image and allow a PXE client to boot from it

It took me a while to figure this out. I had 30 clients that booted from a CD-ROM (they did not have internal hard drives). They all ran the same CD-ROM image. When my company needed to expand, I thought it best to choose a thin client with no moving parts (this is a dusty, dirty environment where the acidic dust wears bearings out!). The new thin clients have no HDD or CD-ROM; they must boot from the network. The CD-ROM image is proven over time, and I wanted to continue to use it. Here is what I did:
Considerations: This is a linux/GNU environment, other OS are not considered.
1. Created a server to boot from (PXE server, DHCP server, NFS server)
2. Made a copy of the cdrom and mounted the copy, such as: # mount -t iso9660 /dev/cdrom /mnt/cdrom
3. Ensure /mnt/cdrom is readable and traversable by the world, such as: # chmod -R 755 /mnt/cdrom (directories need the execute bit set to be entered)
TFTP/PXE setup
1. You will need to ensure that you have an initrd that will "do the right thing" when the Linux kernel boots from PXE. Here are the steps to manually recreate an initrd; this is not too hard:
a. Mount your initrd someplace, such as: # mount -o loop /boot/initrd /tmp/initrd
b. This image is mounted read-only, so you will need to copy its contents out before editing, such as: # mkdir /tmp/initrd_new && (cd /tmp/initrd && tar cpf - .) | (cd /tmp/initrd_new && tar xpf -)
c. So now edit the 'linuxrc' or whatever your initrd uses as a 'init' script (the script that gets everything going).
d. When you edit this linuxrc, you will need to NFS mount the location that contains the files from the CD-ROM you mounted, such as: # nfsmount x.x.x.x:/mnt/cdrom /mnt (or mount -t nfs, depending on what your initrd provides)
e. now create your new initrd image for use to pxe boot, such as: #mkfs.initrd initrd_new initrd
f. now put this initrd into where your pxe server expects it to be
Reboot your thin client and it will come up in the same image that has already been tested.

Thursday, April 17, 2008

Deleting all messages in an Exim4 queue

All that is needed is to substitute into this line a pattern matching the messages you want to delete (the pattern below matches frozen messages, which mailq flags with ***):

mailq | sed -n '/\*\*\*/p' | awk '{print $3}' | xargs exim -Mrm

Wednesday, March 26, 2008

Python: How to find a list of keys for a given value in a dictionary

Here is a straightforward, easy-to-understand way to find the list of keys in a dictionary that map to a given value (which may or may not be in the dictionary):

def GetListOfKeysForGivenValueInDict(dictionary, value):
  keys = []
  for k, v in dictionary.items():
    if v == value:
      keys.append(k)
  return keys


dictionary = {1:2,3:4,5:4}
value = 4
keys_list = GetListOfKeysForGivenValueInDict(dictionary, value)
print keys_list
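The loop above is also a natural fit for a one-line list comprehension (the function name here is mine):

```python
def get_keys_for_value(dictionary, value):
    # One pass over the items; collect every key whose value matches.
    return [k for k, v in dictionary.items() if v == value]

print(get_keys_for_value({1: 2, 3: 4, 5: 4}, 4))  # [3, 5]
```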