Thursday 17 November 2011

vSphere hosts disconnecting

I recently reinstalled my entire vSphere & Veeam infrastructure (the old install was not playing nice with a large SQL database - too many big jobs running overnight at the same time). Overall it went really smoothly - point the new vSphere at the ESXi hosts and it warns that the old vSphere will be disconnected, and Veeam installed and connected with no problems (or so I thought). Unfortunately I tested a Veeam backup before I enabled the Windows Firewall, which led to everything failing on the first night. The Windows Firewall needs a few ports opened for vSphere to remain connected to the ESXi hosts; with them blocked, after a few minutes all the hosts show as "(not responding)" in the vSphere client.



A full list of the ports for VMware products can be found here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012382

However, if you just want to connect your vSphere server to your ESXi hosts, these are the ports that need to be open on the vSphere server (assuming you have installed on Windows and not used the new vSphere appliance).

For vSphere/ESXi 4.x you need the following:
TCP port 111 NFS Client - RPC Portmapper (NFS needed for Veeam's instant recovery feature)
TCP port 2049 NFS Client
TCP and UDP port 902 Heartbeat (This is the important one!)

vSphere/ESXi 5.x also needs these (untested by me, as I am staying on 4.x until Veeam adds full 5.x support):
TCP port 5989 CIM XML transactions
UDP port 111 NFS Client - RPC Portmapper (NFS needed for Veeam's instant recovery feature)
UDP port 2049 NFS Client
As soon as you open the heartbeat port on both protocols your hosts should automatically reconnect and then everything should work. Lesson learned - wait for the timeouts after enabling the firewall before assuming it's all working!
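If you would rather script the firewall change than click through the GUI, rules along these lines should open the 4.x ports. This is an untested sketch using the Server 2008+ `netsh advfirewall` syntax - the rule names are just labels I made up, and older Windows versions use the different `netsh firewall` syntax instead:

```
netsh advfirewall firewall add rule name="vSphere heartbeat TCP" dir=in action=allow protocol=TCP localport=902
netsh advfirewall firewall add rule name="vSphere heartbeat UDP" dir=in action=allow protocol=UDP localport=902
netsh advfirewall firewall add rule name="NFS portmapper (Veeam)" dir=in action=allow protocol=TCP localport=111
netsh advfirewall firewall add rule name="NFS client (Veeam)" dir=in action=allow protocol=TCP localport=2049
```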

Friday 28 October 2011

Setting the window title in PuTTY

There are two ways to set the window title for a PuTTY SSH client: statically, or on the fly. Both have their uses depending on what you want to achieve.

Static window title
Setting it statically is easy - and is best for naming a connection (e.g. "Web server") so you know where the connection goes. There are two settings you need to change in the PuTTY configuration screens.

1. Under Terminal -> Features, tick to disable remote window title changing (to stop the remote server overwriting what you set).

2. Under Window -> Behaviour, enter the new title you would like to see for that connection.

3. Your PuTTY window will now have a fixed title (assuming you remember to save the session settings - if not it will only last for the current session).
Dynamic window title

Sometimes though you want to change the window title on the fly - or set it from the server rather than setting it manually on each client's computer. In this case the following command will let you change the window title from within the Linux shell:

export PROMPT_COMMAND="echo -ne '\033]0;Dev Server - Compiling \007'"
The most obvious way to use this is to type it straight into the shell (or write a script to do it for you), which is what I did in the screenshot below. The other option is to add it to .bashrc, which lets you preset the window title for any users you wish, or to include it in any scripts that take a long time to run so you can glance at the window title on your task bar to see how far along it is.
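The escape sequence is easy to wrap in a small helper so a long-running script can update the title as it goes. A minimal sketch - the function name and the example titles are my own, not anything PuTTY requires:

```shell
# set_title: send the xterm-style escape sequence that PuTTY
# (and most terminal emulators) interpret as "set window title"
set_title() {
  printf '\033]0;%s\007' "$1"
}

# call it at interesting points in a long script:
set_title "Dev Server - Compiling"

# or keep the title updated automatically from .bashrc:
PROMPT_COMMAND='set_title "$(whoami)@$(hostname): $(pwd)"'
```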




Wednesday 27 July 2011

Quickly query a registry key

Sometimes you want to know quickly what a registry key is set to on a Windows machine. Maybe you want to see what the Windows Update settings are because you are trying to deploy WSUS. Maybe you have just applied a security setting with a GPO on a domain and want to check it has taken effect.

From the command line you can quickly query the settings in the registry like this:
c:\>Reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /s

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate
    ElevateNonAdmins    REG_DWORD    0x1
    DoNotAllowSP    REG_DWORD    0x1

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU
    NoAutoUpdate    REG_DWORD    0x0
    AUOptions    REG_DWORD    0x3
    ScheduledInstallDay    REG_DWORD    0x5
    ScheduledInstallTime    REG_DWORD    0xb
    NoAutoRebootWithLoggedOnUsers    REG_DWORD    0x1
    RescheduleWaitTimeEnabled    REG_DWORD    0x1
    RescheduleWaitTime    REG_DWORD    0x5
    RebootRelaunchTimeoutEnabled    REG_DWORD    0x1
    RebootRelaunchTimeout    REG_DWORD    0x1e
c:\>
The /s switch shows all sub keys.  Note the key values are displayed in hexadecimal, so a ScheduledInstallDay of 0x5 is day 5 - counting from Sunday as 1, that makes it Thursday.  A ScheduledInstallTime of 0xb is 11 in decimal, so 11am.


The quickest way I know to convert hexadecimal to decimal if you are not sure is to use Google - a search for "0xb as decimal" shows the answer of 11 above the other search results.
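If you have a shell handy, the conversion is just as quick there:

```shell
# printf understands the 0x prefix, so it converts hex to decimal directly
printf '%d\n' 0xb    # prints 11 (ScheduledInstallTime of 0xb = 11am)

# bash arithmetic expansion works too
echo $((0x5))        # prints 5 (ScheduledInstallDay of 0x5 = Thursday)
```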

Tuesday 26 July 2011

Recycle bins - why so many and how to clear them

Each user has a separate recycle bin for each partition on a computer.  This is great as it means that if Bob empties his recycle bin then Alice still has her files in hers.  However it can be a pain if you are trying to clear space on a hard drive: you can empty the recycle bin on a multi-user machine and there are still files taking up space in other users' recycle bins...

You can control the space available to the recycle bin by right clicking on it and choosing properties.  Sometimes you want to keep a big recycle bin (working with large files) but also try to reduce space used by the bin where possible (virtual servers with backups of whole VMs, obsessive compulsive sysadmin...).  Each user has their own recycle bin, which is great for accountability, but the name of each bin is not easy to decipher.

For NTFS partitions the recycle bin is stored in c:\recycler (note: you have a recycler folder on every partition - for simplicity I'm just going to use c:\recycler in my examples, but if you have a D drive you will also have d:\recycler etc...).  This is a system folder, so to view it you need to make system files visible (Tools -> Folder Options -> View -> untick "Hide protected operating system files").  In this folder there will be several folders with long complex names (e.g. "S-1-8-13-2457591-767657898-480356960-2981").  This long string of characters is the SID (Security Identifier), which uniquely identifies each user on a computer/domain for security.  You can right click and choose properties on these folders to see their size, but if you try to go into a folder that is not yours it appears empty even if it is full of files.


To determine the user name associated with a SID you can type the following at a command prompt (Replace this sample SID with the one you have of course...):
wmic path win32_useraccount where sid="S-1-8-13-2457591-767657898-480356960-2981" get name
Or if you know the name of a user you can find their SID with this command:
wmic path win32_useraccount where name="bsmith" get sid
Now we can track down who has the big recycle bin so if you have ongoing problems at least you know who to chase.

If you are sure you want to delete a user's recycle bin then, as long as they are logged out, you can just delete their folder under c:\recycler - if they are logged in the folder is in use, in which case you will have to ask them to do it.

Sunday 24 July 2011

Bitcoin - How and when to backup wallet.dat

Bitcoin is a fascinating subject from both the technical and social viewpoints.  While there are a lot of people online talking about how to use it and whether you should use it, there seem to be far fewer people talking about how the wallet works from a backup perspective.  This seems strange, as your wallet contains your money and if you do not understand how it works you might lose your money.

If you don't care about the WHY and just want the answer jump to the bottom of this post.


The first thing to understand is that your wallet does not contain the actual money.  Your wallet contains a lot of public/private key pairs (I'm assuming you understand the basics of public/private key cryptography - if not I'll be writing something up on that soon inspired by the latest portforwardpodcast).  The blockchain contains the list of who has which bitcoins.

Every time you are sent money it is sent to one of your public keys and only your private key is able to send that money on again.  As your bitcoin client downloads each new block it compares it with the keys in your wallet and adds or subtracts the amounts sent to/from your keys.

This is part of the security of Bitcoin: if someone tries to send you bitcoins, your client can see whether they actually have those bitcoins to send by looking back through the blockchain.  If everything is correct then the transaction will be included in the next block generated, which is the first confirmation.  By the time you have 6 confirmations you can be pretty sure the money is yours.

Key pairs get used up in two ways, so your wallet.dat file contains 100 spare keys for future use (you can change this default to anything you want).  A new public/private key pair is automatically generated each time you send or receive bitcoins, to help improve the anonymity of Bitcoin.  Each new key pair is added to the end of the list, and every time a new key pair is needed the oldest spare one is used.

You can see this happening when you receive bitcoins - the client shows your current bitcoin address on its main window.  When you receive some bitcoins a new address from the list of spares will become active and a new address will be generated and added to the end of the spare list.  Since your wallet still contains the old address you can still receive funds sent to it without any problems.


Sending bitcoins can also use up a key pair.  This is because if you have received 10 bitcoins to a single address and you want to send 2 bitcoins, the transaction sends all 10: 2 bitcoins go to the person you are paying and the remaining 8 bitcoins are sent back to a new one of your own addresses.  This is all hidden by the client, so you do not see these keys.

You can see this happening at http://bitcoincharts.com/bitcoin/ which shows the latest unconfirmed transactions - almost every transaction has 2 outputs.  One will be the money going to its new owner and the other will be the remainder of the money going back to its existing owner on a new address.

You can back up your wallet.dat file, complete several transactions and keep using the new addresses to receive money with no problems.  At some point, probably after about 100 transactions (sending and receiving), you will start using key pairs that are not in your backed up wallet.dat file.  If you then lose your wallet you could lose a lot of your bitcoins, as the backup will not have valid key pairs to access the money.

What this means for your backup of wallet.dat:

So taking all this into account, you should back up your wallet.dat file regularly, but it is not necessary to back up after every single transaction - your wallet already contains the key pairs it will use for the next 100 transactions.  Speaking of which, please encrypt your wallet backups!  A future bitcoin client is expected to include built-in wallet encryption, which should improve things, but until then it's your money - look after it!

On Linux your wallet.dat file is stored in ~/.bitcoin/

On Windows the location of your wallet depends on your Windows version.  For Windows 7 (and Vista if you are still suffering that OS) it is in C:\Users\<your user name>\AppData\Roaming\BitCoin.  For older versions of Windows (e.g. XP, 2003 etc) it is in C:\Documents and Settings\<your user name>\Application Data\BitCoin.

Just close the bitcoin client and copy this file somewhere safe (both "safe if my computer dies" and "Safe from anyone that wants to steal my money"!)  The only file you need to backup is the wallet.dat file - all the other files will be re-created/downloaded if you start the bitcoin client with the wallet.dat file in the bitcoin data directory.
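A date-stamped copy makes it easy to keep several generations of backup.  A minimal sketch - the function, paths and naming scheme here are my own, not anything the bitcoin client provides, and remember to close the client first:

```shell
# backup_wallet: copy a wallet.dat to a date-stamped file in a backup
# directory. Close the bitcoin client before running this for real, so
# the file is not being written mid-copy.
backup_wallet() {
  wallet="$1"
  backup_dir="$2"
  mkdir -p "$backup_dir"
  cp "$wallet" "$backup_dir/wallet-$(date +%Y%m%d).dat"
}

# usage with the default Linux location:
# backup_wallet "$HOME/.bitcoin/wallet.dat" /media/usb-stick/wallet-backups
```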

Disclaimer: Everything here is based on my understanding of how things work - I am in no way connected with bitcoin apart from being a user.  The main bitcoin client is still at version 0.3.24 - it is a relatively new program that is likely to change as new issues with the growing bitcoin network emerge and new features are added. 

Friday 15 July 2011

Yubikey Radius on Premise V3 setup guide

Yubico have just released version 3 of their excellent Radius on Premise virtual appliance for authenticating users with a Yubikey as a second authentication factor.  It now supports multiple Yubico validation servers so is more resilient, has better user management and logging, and is now available in OVF format, which should make importing into virtual environments less VMware-specific.

This is a quick guide to how I got it working for my VPNs (SBS 2003 & Netscreen SSG 140 firewall).

Release notes and download links for the appliance and guide can be found here: http://wiki.yubico.com/wiki/index.php/YubiRADIUS_Virtual_Appliance_version_3.0.  As usual with Yubico the guides are clear and take you through the setup step by step.


I downloaded the VMware image file, extracted it and then imported it into VMware using the VMware Converter program (available from VMware for importing backups).  It's a very small VM - 8GB hdd, 256MB RAM and 1 virtual CPU.  It imported to VMware in 3 minutes.

When run, the appliance gets an IP address via DHCP, so I set this to static by logging on to the console of the VM (user yubikey, pass yubico), then opening a terminal and editing the /etc/network/interfaces file (the root default password is also yubico).  I rebooted the machine, logged in from my desktop to the web interface, changed the default passwords and logged in again to check.  I tried changing the IP from within Webmin at first but it did not seem to save correctly.

Now it is time to start setting up the Yubikey magic.  There is a separate Webmin module for Yubikey under the System heading.  Enter your (authentication) domain name and click add.  Under global configuration I entered my API ID and key (generate yours here: https://upgrade.yubico.com/getapikey/).  You can also enable auto provisioning (so the first time a Yubikey is used it is permanently assigned to that user's account).  I don't like doing this, as when I import users I get the whole domain where I really only need a handful of people to be able to connect remotely.  If you only have a few users, or want all of your users to be able to connect, it is a very nice feature to have.

Back on the domain tab I clicked on my domain name then users import.  This is where I had trouble when I set up the previous version of ROP as unless you use it a lot the LDAP syntax can be a bit confusing.
Host: IP address of domain controller
Port: should be 389
LDAP: change to version 3
BaseDN: Where are your users stored in active directory?  Mine looks like this: "OU=SBSUsers,OU=users,OU=MyBusiness,DC=mydomain,DC=com"
UserDN: User with access to connect to active directory (your user account?)  Mine looks like this: "CN=My Name,OU=local,OU=SBSUsers,OU=users,OU=MyBusiness,DC=mydomain,DC=com" - all users in this folder and all subfolders will be imported.
Password: password of the user account specified above
Schedule: left blank (one time import, so it will not auto-update usernames)
Filter: "(objectClass=person)" - only pull through user accounts
Notes: I left blank
Login Name Identifier: "sAMAccountName"

Hit "Import Users" and you should get a message reporting success.  Go back to the users tab and you should have a long list of people's accounts from Active Directory.  From here you can manually add a Yubikey to users' accounts.

Next I need to tell the ROP server to accept queries from my firewall.  This is done on the configuration tab for the domain and just needs the firewall's IP address and shared secret.  One very nice new feature of ROP3 is the RadTest tab, which lets you test that an account is working.  Getting failures on the firewall can be a pain because it's not always obvious where the issue lies - this lets you eliminate the ROP side of things easily and quickly, as if it works here it must be the firewall with issues.  However there is one important point:   IN ORDER TO USE THE BUILT IN RadTest TAB YOU FIRST HAVE TO ADD THE LOCAL 127.0.0.1 IP TO THE LIST OF ALLOWED CLIENTS UNDER THE DOMAIN->CONFIGURATION TAB!  I lost some hair trying to work out why it was not working - I assumed that the new feature to test if things were working would be allowed through by default.  My bad - a very obvious mistake with hindsight!  If the RadTest seems to do nothing, keep waiting - it is not too obvious that it is doing something when it is hitting a timeout.

Once tested you should be able to point your systems at its IP address (Radius port 1812, accounting port 1813) and start authenticating your domain users with Yubikeys.  At some point I'll document how I setup the Netscreen SSG 140 side of this to give my users dual factor VPN access to my network.  If that would be handy to anyone drop me a comment and I'll move it up my list of things to post.

[UPDATE 12 March 2012: Done! Click here for my guide on setting up a dual factor VPN on Netscreen firewalls]

Monday 11 July 2011

Connect to a network device without knowing its IP address

A handy tip from Linux Format (LXF147).  If you have an appliance which is turned on but you do not know its IP address you can still connect over the network using the arp table.

On pretty much any appliance there will be a sticker with model number, version etc.  Most appliances will also show their MAC address on these stickers, and you can use this to connect over the network.
To show the current arp table, which might already contain the device, run:
sudo arp -n
If it's not shown, you can manually map a temporary IP address to the device's MAC address from your computer with:
sudo arp -s <IP address> <MAC address>

This is a lot quicker than starting something like nmap and scanning whole network ranges to try to spot the IP address you need.  With this you should be able to connect to that new network storage device you just plugged in and browse its admin interface to discover and/or set its proper IP address.
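If the device has already been chattering on the network, you can pull its IP straight out of the arp table by MAC address.  A sketch - the MAC, IP and table contents below are made-up sample data; in practice pipe the real `arp -n` output into the awk filter:

```shell
# Sample `arp -n` output - in practice use: arp -n | awk ...
arp_output='Address       HWtype  HWaddress           Flags Mask  Iface
192.168.1.10  ether   00:11:22:33:44:55   C     eth0'

# Print the IP the kernel has recorded for a given MAC (field 3 of arp -n)
echo "$arp_output" | awk -v mac="00:11:22:33:44:55" \
  'tolower($3) == tolower(mac) { print $1 }'
```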

Thursday 7 July 2011

PortForwardPodcast VMWare Notes

Recently I was invited on to the excellent PortForwardPodcast to talk about VMware and how it works for small businesses.  Please check out their site for several other excellent podcasts.  I've copied my notes for the show below, so if you came here following that podcast you might find a bit more detail.  Or check my other posts on this site, as I have made notes several times on VMware while learning to use and deploy it.

Friday 1 July 2011

Veeam VMWare network restore speed

Whilst most of VMware is very fast and efficient, copying files in and out of the datastore is depressingly slow - you are lucky to get 10 MB/s even with fast hardware and a gigabit network; I normally get closer to 5 MB/s.  There is a trick you can use with Veeam restores over the network to vastly improve this speed, and something similar should work if you are using shared storage, but I can't test that myself.

The problem is basically the slow network performance when copying files to or from the datastore over the network.  There are a lot of people talking about this online and it has been an issue with VMware since at least version 3.5 (4.1 made some small improvements).  At 5MB/s a large VM can take ages to copy - especially as it transfers the full image over the network.  So if you have a VM with a "thin" hard disk format, 200GB in size but with only 10GB of data, it will take 10GB on the datastore; when you try to copy it to your desktop it will transfer 200GB over the network.  The code that deals with migrating datastores, storage vMotion etc is all really fast, so people with higher-end setups will rarely hit this slow path.  Also if you have a SAN you can just upload direct to the SAN and then connect it to VMware - not possible if you are using local storage...


This faster code is the way to get big VMs onto ESXi quickly using Veeam.  First you need to do an "Instant Recovery".  This sets up the VM on the destination ESXi host but also shares a folder from the Veeam server containing the backed up image.  Restoring this instant recovery takes seconds regardless of the size of the VM.  However the files themselves are still stored on your backup server.  If you were to turn on this instantly recovered VM it would be running from files over the network - probably a bit slow and not what you really want to do long term.  Instead, right click on the VM in vSphere and choose migrate - follow the wizard and tell it to change the datastore.  Moving the files from the temporary backup datastore to your main datastore like this runs as fast as your hardware supports - probably at least 20MB/s even on fairly normal servers as long as you have gigabit network cards in use.

Hopefully VMware will fix this soon, as it takes a bit of googling to find out why restores take so long with Veeam and to find a workaround - I'm sure many people just give up.  This same problem makes backing up a VM from the free ESXi hypervisor a very slow process, which might be putting people off buying the program.

Tuesday 7 June 2011

RSA security tokens compromised

Recently there have been some problems with RSA security fobs reported in the media like this: http://www.bbc.co.uk/news/technology-13681566

Basically, back in March RSA (the guys that make most of those secure number generator key fobs for online banking etc) were hacked (you might remember me mentioning it at the time if you know me).  RSA were taking the line that it was not a big deal as the attackers did not get enough information to actually compromise anything, but would not say what was taken "for the security of our customers".  Unfortunately several major North American military companies have now been hacked or had attack attempts (this is new news - I'm sure it will become a bit clearer over the next week or so) due to compromised RSA tokens (Lockheed Martin, L-3 Communications and Northrop Grumman for example).

It currently looks like the intent behind the original hack was to target American military tech, and RSA are now taking the line that nobody else needs to worry much as the original hackers are not interested in them.  This of course is debatable (with a confirmed exploit, how long until people who ARE interested in bank accounts obtain a copy?).  RSA are offering to replace all RSA tokens for customers with concentrated user bases - they are happy to send thousands of replacement fobs to a company, but will not send those fobs directly to the end users (letter from the RSA Chairman here: http://www.rsa.com/node.aspx?id=3891).  That is an interesting read for diversionary tactics.  He mentions that there have been lots of high profile attacks on other companies that were nothing to do with the RSA problems.  He states that Lockheed Martin have denied that the attack succeeded (they would!) but obviously can't say that it has not succeeded somewhere else.  He says that the attack is not a new threat to RSA SecurID - correct, it comes from the hack they announced back in March and downplayed; we now just know that the information stolen in that hack can be, and has been, used in an attempted (maybe successful, maybe not) hack on top US military companies.

For end users (you and me) this means two things:
1. At the moment if you use RSA key fobs you are currently probably safe but it may not stay this way
2. The only people who can replace your fobs are the people that gave them to you (your bank for example) and they are probably not going to be too keen to do anything, as shipping new fobs to all their customers and ensuring the right person gets the right fob is going to cost them a lot (also RSA are offering financial protection insurance, so the banks will not lose anything if anyone is compromised)

I get the feeling I've heard this story a few times before (warning, massive generalisation ahead).  Company makes good product and grows as people use it.  As the company employs more people there is a greater chance that one of them will make a mistake/leave their laptop on a train/get greedy and data is lost.  The more popular the product, the bigger the companies likely to be affected when the brown stuff hits the fan.  Media run a few stories, company tries to downplay the issue and most people keep using the service anyway.  I'm sure most PS3 owners are going to keep buying and playing games even after Sony lost millions of their account details with passwords, addresses, credit card numbers etc included.

You could start using something like the excellent Yubikey (http://www.yubico.com/yubikey), host your own dual factor authentication server and generate your own keys so nobody else has a copy.  That way even if your provider was compromised your keys would not be exposed.  There is still the risk of an attacker discovering an exploitable vulnerability in the encryption scheme used, but this is fairly unlikely.  I'm sure many people who have deployed Yubikeys or similar devices will be quite happy reporting to their managers that they are immune to this attack.  Unfortunately for me, I have purchased a batch of Yubikeys and got them working on our VPN, but an issue with IP address allocation (http://forum.yubico.com/viewtopic.php?f=5&t=651) has stopped me deploying them so far - along with the usual list of more important projects as long as my arm - so I'm not able to gloat as much as I'd like.  Not that any internet connected system is 100% secure.

At the end of the day, if you are reading this you are probably safe enough.  Use secure unique passwords, make sure you visit reputable sites, try not to put the answers to security questions on Facebook (what is your mother's maiden name...), use an up to date virus checker etc.  Even if someone can duplicate your RSA token they would also need your password, username etc.  If you are responsible for managing RSA keys then read through the guidelines that RSA have produced (http://www.rsa.com/products/securid/faqs/11370_CUSTOMER_FAQ_0311.pdf) to see if there is anything further you want to do.  If you are working for a top military company, bank or other place you think is at high risk because of this, then go talk to your security people.  You do have them, right?

Thursday 28 April 2011

Migrating a Win SBS 2003 server into VMware

Last weekend I had the joy of migrating our main domain controller (Windows Server 2003) from its own physical server to a virtual server running on VMware (ESXi 4.1U1).  Overall it went very smoothly with just a few issues, but it was a long process.  I started it Saturday mid afternoon, and came in Sunday morning to find it had got confused because the time on the ESXi host was wrong (Windows needs activation NOW, no network, installation ID not displayed, and MS tech support don't work on Sundays).  I fixed the time on the host and restarted it.  I came in Monday (bank holiday luckily) and finished the install without further issues.  These are the steps I took:
  1. Firstly uninstall everything you don't need any more, delete all temporary files, empty the recycle bin and move any file shares off the machine for now (transfer rates for the Converter were about 5MB/s on decent hardware with a gigabit switch - far faster to move files out, then convert, then move the files back in)
  2. Defrag the hdd (generally a good idea and should make things faster)
  3. Turn the machine off and boot from a VMware Convertor CD
  4. At this point I blocked all emails at our firewall so that if I was not happy at any point I would not have any emails delivered to the wrong machine.
  5. Leave whatever networks it detects alone - far better to clean up unneeded interfaces after the clone than to delete one that was actually in use
  6. For best speed do not change the size of partitions
  7. Unselect any recovery/maintenance partitions - they will be irrelevant on the virtual hardware anyway
  8. Tick install VMware Tools - it's needed for lots of virtual hardware drivers so you want to install it early
  9. Run the convert
  10. Go home for the night (Seriously, my convert took about 17 hours)
  11. Before turning the machine back on check the hardware settings
  12. Change the SCSI card type from Bus Logic to LSI Logic - significantly faster
  13. Create a Snapshot
  14. Turn on the virtual machine, log in as the domain administrator user not just a member of the domain administrators group*, let it find drivers for everything it detects on boot, let it install VMWare tools and then when you are happy its finished reboot
  15. Repeat point 12 if required
  16. Open a command prompt window and enter these two lines:
  17. set DEVMGR_SHOW_NONPRESENT_DEVICES=1
      devmgmt.msc
  18. In the device manager select "Show hidden devices" then go through and delete all the greyed out devices (yes all of them, there might be a LOT - it took me about 20 minutes).  They will not be used again but if you leave them there they will still load the drivers on boot and might have unintended consequences 
  19. Note that if you use static IP addresses then the missing network adaptor will still have it assigned - you need to remove the device (as described in step 18) before you will be able to assign the IP address to a VMware virtual network adaptor and get the network working.
  20. Reboot and check you are happy with all the file shares, test incoming and outgoing emails (after re-enabling email through the firewall if you disabled it) etc
  21. Activate windows
Congratulations, you now have your server running in VMWare.  Hopefully none of your users will notice anything different (or will notice an improvement - but they never seem to mention that). 

    * Some things went wrong unless I was logged in as the user "administrator" - for example, activation did not generate its identifier so it was not possible to activate.  Your experiences might differ, but my gut feeling is that for fundamental setup the actual administrator account has fewer problems than a member of the domain admins group.  For normal maintenance, windows updates and software installs I just use my main account.

Wednesday 27 April 2011

So long TiddlyWiki

I was using the excellent TiddlyWiki for this blog. I have actually started to see my own site appearing in my searches quite a bit recently. Due to the way TiddlyWiki works (just a single HTML file with DIVs to show/hide posts) the search engines see it has specific key words, but when you follow the link you get the intro page. I'm going to migrate all my old posts over to Blogger (manually - yay!) over the next few days.... Update 28th April: Finished!  45 posts moved over, several ignored or merged as they were not valuable enough to save.  I corrected a few typos during the transfer, however I left the content as it was written.

I'll keep using TiddlyWiki as a personal notebook for various projects, it is an excellent tool.  Unfortunately it does not quite cut it as a blog (but that was never its intention anyway).  Onwards and upwards - it will be interesting to see how many hits this site actually gets, something I could not track when using TiddlyWiki.  My main reason to write this blog is to keep a note of things I learn for myself, and explaining them helps me understand them better.  If anyone finds these random notes useful then it's an added bonus.  If this site becomes popular I reserve the right to change my morals and plaster it with adverts...   :)

    Monday 18 April 2011

    VMware Tools install on Redhat / CentOS 5.5

    From ESXi 4.1 onwards VMWare is no longer providing an "install vmare tools" option that includes rpm installers for RedHat or dirivatives. The alternative they are pushing people towards is using the package management system built into the Linux distribution. This has several advantages (eg updates might actually get applied outside of server upgrades) but also involves jumping through a few more hoops to get it going and means you manually need to change the package source if upgradeing the ESXi host.

    These are the steps I took to get this working.
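    The general shape of the package-manager approach is a yum repository definition pointing at VMware's operating system specific packages (OSPs). This is a sketch from memory - the repository URL layout, key location and package name may well have changed, so check VMware's current documentation:

```
# /etc/yum.repos.d/vmware-tools.repo (sketch - paths assumed, verify against
# the packages.vmware.com listing for your ESXi version and distro)
[vmware-tools]
name=VMware Tools for ESXi 4.1 / RHEL-CentOS 5
baseurl=http://packages.vmware.com/tools/esx/4.1/rhel5/x86_64
enabled=1
gpgcheck=1
gpgkey=http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-KEY.pub
```

Then something like "yum install vmware-tools-esx-nox" for a headless guest (again, the exact package name should be checked against the repository contents).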

    Wednesday 23 March 2011

    Yubikey two factor auth for Netscreen Firewalls

    NOTE: I have posted an updated version of this document with more detail HERE to cover version 3 of Radius on Premise.

    I have also posted a thorough guide to setting up the Netscreen Firewall to use this type of authentication for VPN access HERE

    I'm investigating using Yubikeys for dual factor authentication of VPN users on our Netscreen firewall (SSG-140).

    Yubico make a very nice device for two factor auth called a Yubikey: a very small USB device, as thin as a key, which you can give to users to keep on their keyring (I've had one on my keyring for about a year now, hardly notice it and it has not got damaged at all). When they want to use it they plug it in and rest their finger on the sensor for a few seconds and it generates a one time password. I downloaded the Radius on Premise VMware image they provide (http://wiki.yubico.com/wiki/index.php/RADIUS_on_Premise-II) and imported it using VMware Converter. The guide they provide on that page (the link next to the download link for the VMware image) gives a good walkthrough of the setup process; I only had a couple of issues:
    • When importing users remember to use the correct format - I'd forgotten that the CN for a user account should be the user's name, not the username:
      CN=Peter Simpson,OU=SBSUsers,OU=users,OU=MyBusiness,DC=example,DC=com

    • When you actually try to log in or test with radtest the username is case sensitive - MS Active Directory stores the case but ignores it and half my user accounts are lowercase, the other half are mixed case.
    My current problem is that although the authentication works perfectly I'm not getting an IP address, so the VPN is not going to work (http://forum.yubico.com/viewtopic.php?f=5&t=651). I guess the Netscreen assumes any RADIUS server will be able to provide IP addresses, but the Yubico Radius on Premise appliance does not. FreeRADIUS 2.0.4 (the version in ROP) does support DHCP but it is beta and requires recompiling from source. Luckily the VPN client I am using (Shrew Soft VPN) allows me to manually configure the VPN IP address, so I can get things working, but the idea of manually setting an IP address for each user does not appeal. For now though it will work fine for testing, I just need to find a few test subjects...
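The radtest utility mentioned above ships with FreeRADIUS; for reference, a test invocation looks roughly like this (all values are placeholders for illustration - the arguments are user, password, server, NAS port number and shared secret):

```shell
# Remember the username is case sensitive against the imported AD accounts
radtest "Peter Simpson" 'passwordplusOTP' 192.0.2.10 0 sharedsecret
```

An Access-Accept in the output means the appliance authenticated the user.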

    Thursday 24 February 2011

    VMware vSphere CLI install on Debian

    I just installed the VMware vSphere Command Line Interface on Debian and had a headache because it kept reporting missing packages and RPM errors. Basically the install script looks for a file /etc/*-release and uses this to determine what type of system it is installing on. Since Debian does not have this file it falls back to assuming a RedHat based system and uses RPM to check for required dependencies. Not surprisingly this fails.

    The quickest way to get round this is to pretend the system is Ubuntu for the install, then delete the fake release file:
    # echo ubuntu > /etc/temp-release
    Install using ./vmware-install script
    # rm /etc/temp-release
    The dependencies I had to install were libssl-dev and perldoc
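Put together, the whole workaround is only a few commands (a sketch - the installer script name is assumed from the vSphere CLI tarball layout and the Debian package names may differ slightly from what the installer reports, so check both):

```shell
apt-get install libssl-dev perl-doc   # the dependencies the installer complained about
echo ubuntu > /etc/temp-release       # fool the /etc/*-release distro check
./vmware-install.pl                   # run the vSphere CLI installer
rm /etc/temp-release                  # remove the fake release file afterwards
```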

    Thursday 17 February 2011

    Learning IPv6 part 4 - Postfix email server

    Postfix is very simple to install and was also pretty simple to set up for IPv6 use.
    1. Install the postfix and alpine programs (plus dependencies)
    2. Edit /etc/postfix/main.cf
      1. Check the mydestination field includes the domain you have set up
      2. Ensure the mynetworks field includes the IPv6 network you are using (so for me I needed to add "[2001:470:1f09:12e9::]/64" to the end of the line)
      3. Add a line "mydomain=test_ipv6_domain.com" above the mynetworks line
      4. Add "inet_protocols = ipv4, ipv6" to the end of the file, otherwise Postfix currently defaults to listening only on the IPv4 addresses
    3. Add the relevant lines to the bind zone file
      1. @ IN MX 10 mail6.test_ipv6_domain.com
      2. mail6 IN AAAA 2001:470:1f09:12e9::123
    Allow mail through the firewall to the IPv6 address and it all worked smoothly.
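Pulled together, the edits from the steps above look something like this (the mydestination contents are an illustrative example - yours will reflect your existing config):

```
# /etc/postfix/main.cf (relevant lines only)
mydomain = test_ipv6_domain.com
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 127.0.0.0/8 [2001:470:1f09:12e9::]/64
inet_protocols = ipv4, ipv6

# BIND zone file additions (note the trailing dot on the MX target)
@      IN  MX    10 mail6.test_ipv6_domain.com.
mail6  IN  AAAA  2001:470:1f09:12e9::123
```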

    Reverse DNS was a real pain until I realised I needed to register my DNS server with Hurricane Electric on their tunnel setup page. Obvious when you think about it - the IPs are registered to HE, so how are they to know what my DNS server is?

    A partly useful site is http://www.mxtoolbox.com, which lets you check what it sees your DNS entries as; however it does not support IPv6 so it shows the IP address as 0.0.0.0 - at least you know it is working that far. :) You can also check other DNS records on this site (NS, A, SPF, TXT, PTR, CNAME etc.)

    A better guide to Postfix setup which does not include IPv6 is here: http://www.linux.com/learn/tutorials/308917-install-and-configure-a-postfix-mail-server

    Wednesday 16 February 2011

    Learning IPv6 part 3 - Apache and DNS

    Another day, another IPv6 adventure. Debian works with IPv6 really easily, just need to add the details to /etc/network/interfaces as shown in Linux network address setup. I'm using manually configured IP addresses for now, at some point I'll look into DHCP for IPv6.
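For completeness, the /etc/network/interfaces stanza looks something like this (the address is this post's example; the interface name and gateway are assumptions for illustration - the gateway will be whatever your router or tunnel endpoint advertises):

```
# /etc/network/interfaces - static IPv6 configuration (sketch)
iface eth0 inet6 static
    address 2001:470:1f09:12e9::123
    netmask 64
    gateway 2001:470:1f09:12e9::1
```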

    That got me on the internet. The next step was Apache, which it turns out was already done - I spent a while looking into config files, but all I needed to do was restart Apache once the IPv6 address was set up and it picked it up automatically. To configure specific virtual hosts etc. the syntax hasn't really changed, you just need to put the IP address in square brackets in the /etc/apache2/sites-available/* file. Oh, and to browse by IP address you also need to use square brackets, so in the web browser address bar you should type http://[2001:470:1f09:12e9::123] or whatever the address is for you.
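As an illustration, a virtual host bound to the example IPv6 address looks like this (ServerName and DocumentRoot are placeholder values):

```
# /etc/apache2/sites-available/example - the IPv6 address goes in square brackets
<VirtualHost [2001:470:1f09:12e9::123]:80>
    ServerName www.test_ipv6_domain.com
    DocumentRoot /var/www/
</VirtualHost>
```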

    Next I needed to create a fully authoritative DNS server and connect it to a domain so that it can be resolved from the net, as 123-reg does not support IPv6 yet. This turned out to be very simple (although to be honest I just wanted to get it working, not optimise it or set up internal/external views etc., so it is very basic) following the instructions here: http://www.cahilig.net/2008/07/04/how-setup-lan-dns-server-using-bind9-under-debian-and-ubuntu-linux. There is also a good followup guide for DDNS here: http://www.cahilig.net/2008/08/02/debian-and-ubuntu-ddns-bind9-and-dhcp but not much detail on IPv6, mainly IPv4. Securing the BIND instance with chroot is covered fairly well here: http://linux.justinhartman.com/DNS_Installation_and_Setup_using_BIND9 but I have not tried this yet.
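The basic BIND setup boils down to a zone declaration plus a zone file; a minimal sketch using this post's example domain and address (SOA timers and hostnames are placeholder values):

```
# /etc/bind/named.conf.local
zone "test_ipv6_domain.com" {
    type master;
    file "/etc/bind/db.test_ipv6_domain.com";
};

# /etc/bind/db.test_ipv6_domain.com - minimal zone skeleton
$TTL 86400
@     IN  SOA   dns1.test_ipv6_domain.com. admin.test_ipv6_domain.com. (
                2011021601 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; minimum
@     IN  NS    dns1.test_ipv6_domain.com.
www   IN  AAAA  2001:470:1f09:12e9::123
```

Remember to bump the serial every time you edit the zone file or secondaries (and some caches) will not pick up the change.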

    Set the DNS server for my test domain to be the laptop (123-reg actually insists on at least 2 DNS servers, so I just put dns1.domain.com and dns2.domain.com and gave the same IP address for both). Obviously not a good idea for a real system unless you like to live dangerously, but for this test it's fine - all my IPv6 stuff so far is running from a single old Dell laptop and a Netscreen 5GT firewall appliance.

    Allowed DNS (IPv4) and HTTP (IPv6) through the firewall and outside hosts can browse the web server over IPv6. The DNS server is set to respond to queries over IPv6 too, but since most things look up DNS entries over IPv4 at the moment I've left it for now. I've also not yet explored the reverse zone file in IPv6, or DDNS updates linking to a DHCP service to automatically add new hosts to the zone files. To test that properly I might need to set up several virtual machines to play with how IPv6 addresses are allocated, and have a separate internal/external DNS server setup.

    This has got me thinking about our DNS setup. At the moment internal DNS is handled by our MS Small Business Server 2003 and external DNS by our registrar. With IPv6 it might make more sense to handle this ourselves, as we are likely to make changes more often initially, but we would need several DNS instances to be reliably redundant. Say 2 internal DNS servers auto updated with new hosts as they are added by DHCP, and two external DNS servers which only contain the IP addresses of the hosts we want external people to be able to connect to. One of those external DNS servers would need to be off site though (otherwise if we get a power cut and someone tries to email us, their mail server will be unable to resolve the address and may bounce the message; if they can resolve the address but the mail server does not respond then most servers keep trying for a few days). Virtual machines would make this pretty easy and low cost except for the external server, but I'm sure I can arrange a reciprocal deal with someone I know for something as low bandwidth as a DNS server. An added benefit, in that if our main SBS server dies now we lose DNS and therefore web access. I can easily change a DNS entry to our ISP manually but don't fancy doing that on dozens of computers while trying to fix a domain controller... It is still 4 new servers to keep an eye on, which is more work.

    Monday 14 February 2011

    Learning IPv6 Part 2

    I get the feeling that IPv6 is going to be bit tricky at first.

    I sat down to try and get it going again and this time it all worked first time with no issues. Not sure what I was doing last time, but I created an IPv6 tunnel with Hurricane Electric (free tunnel broker service at http://tunnelbroker.net), set up the tunnel in my old Netscreen, added the routed subnet (despite them making the changed part of the IPv6 address bold I missed it the first time!), manually added some IPv6 addresses to the firewall and an old XP laptop, and I was live on IPv6. No problems at all. Hit the button on the Hurricane Electric certification page to test and got promoted from Newbie to Explorer.

    With lots of confidence I read the next test: basically I need to host a website on IPv6 with DNS resolution. I quickly decided to go for a Linux install on the laptop with Apache as a decent challenge (I last set up Apache about 5 years ago, never with IPv6) and started downloading the latest Debian install ISO (6.0 - Squeeze - just released a few days ago). I hope this will also give me a good foundation for any other challenges that come along. While downloading I went to our domain registrar's page planning to set up the DNS entry and found that 123-reg.co.uk does not support IPv6. Wait, what?

    So our ISP (BTnet) does not yet support IPv6, and neither does our "Fanatical Support" web host Rackspace or our domain registrar 123-reg.co.uk. The ISP I can work around with the free tunnel from Hurricane Electric, and I think I can run my own DNS server, but I don't think 123-reg will let me make a separate server authoritative for a subdomain. Guess I'll be creating an ipv6testdomain.co.uk or similar to get this working for now, unless 123-reg have plans to support IPv6 soon. UPDATE: Yep, 123-reg have plans but no dates yet; guess I'll be learning how to set up an authoritative DNS server for IPv6 a bit sooner than expected.

    In the meantime Debian Squeeze has nearly finished installing, so I'm off to set up IPv6 in Linux and then probably watch the ASCII Star Wars intro to check it's working.

    Tuesday 1 February 2011

    Final IPv4 addresses allocated by IANA

    Well, APNIC has finally requested the last two blocks it is entitled to, taking the number of available blocks to 5. Since this is the same as the number of RIRs (Regional Internet Registries), the final distribution of blocks has now been put in motion by IANA.

    https://www.apnic.net/publications/news/2011/delegation

    Not that the date this happens affects the endgame much - generally we are still expecting to run out at the RIR level by the end of the year. APNIC expects to start limiting IP addresses in 3-6 months, after which it's new customers only, and they only get a few addresses (max 1024 regardless of how big the company is). What I find really frustrating about the whole situation is the lack of support from big ISPs. I can understand businesses not buying into it (hey, the internet is still working, right?) and the techies getting frustrated, but for an ISP like BTnet to still not support IPv6 is getting farcical. At the end of January I tried again and got this response:

    BTnet does not currently support IPV6.
    IPv6 on BTnet
    BT is committed to the development and support of IPv6 across its networks and services including our UK and Global Internet platforms and our UK (BTnet) and Global (BTIA) Internet Access Services. This will ensure IP address space will continue to be available for all our customers as the IPv4 address space is forecast to become exhausted within the next few years. Upgrading our platforms to IPv6 will also allow BT to stay at the forefront of IP based services.
    Currently IPv6 is being trialled with some customers at a number of sites on our Global Internet platform. Plans for full introduction to the Global and UK platforms are being prepared with the expected hardware and system upgrades expected to start during 2011.
    No changes are expected to existing BTnet services. Customers will be kept informed when IPv6 capability is deployed on our platforms and any changes to their service options will be fully communicated at that time.
    So much for a business class ISP - we only pay for a 10Mb leased line on a 100Mb bearer over fibre. Literally months away from this becoming a problem, and this is going to be a bottleneck if the economy does turn around. With APNIC due to run out first, a lot of the companies in Asia will start to implement IPv6, and with its multicast abilities I'd expect lots of online systems (e.g. online games) to start preferring IPv6 reasonably soon.  They already use P2P for things like updates quite a bit.  Ah well, I'll just keep asking every few months. Should have my test IPv6 setup online in a few days; no point waiting for BT any longer.

    Thursday 27 January 2011

    Learning IPv6 Part 1

    Still thinking a lot about IPv6 and not doing too much yet.

    I have updated an old Netscreen 5GT to its highest recommended firmware (5.4.0r19), which supports IPv6, and enabled it. Setting up rules etc. is pretty much the same as for IPv4 and I can't see there being any big issues there. I have two options for getting it on the net: get an IPv6 allocation from BT (working on it) or use a tunnel broker (for example Hurricane Electric). Either way I'll need to have my Netscreen 5GT directly connected to the internet outside our firewall, and at the moment our incoming internet is wired directly to our firewall. I'll need to unplug our net and install a switch so I can split the incoming traffic, which means getting to the office before anyone else. Some day soon.

    One thing I still have not got my head around is how to allocate IP addresses inside the company. I know the basic idea is the same as if we had an IPv4 /24 to play with: setting up lots of different subnets for different roles. My problem is just the sheer scale - with IPv6 we could have thousands of subnets, and being so used to the limited scope of IPv4 it's hard to decide on a setup at the correct grain - too many subnets will be a pain to maintain and too few will lead to issues later on. Without experience it will be tricky to get that right. Also, if I were to go for one subnet for printer class devices, one for each department's desktops, one for the main servers, one for our public facing servers etc., how well would that interact with our current setup where all printers/desktops/internal servers are on the same subnet?  Having two separate overlapping topologies could get interesting....
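To put the scale into numbers, the arithmetic is simple: a routed /48 allocation contains 2^(64-48) possible /64 subnets, each of which is itself vastly bigger than the whole IPv4 address space.

```shell
# Number of /64 subnets available inside a /48 allocation: 2^(64-48)
echo $(( 1 << (64 - 48) ))   # prints 65536
```

Even Hurricane Electric's free routed /48 gives 65536 subnets to organise, which is why picking the grain up front matters.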

    Tuesday 18 January 2011

    Debian snmpd not listening on network interfaces

    By default snmpd on Debian listens on the local loopback address only.  To fix this you don't edit /etc/snmp/snmpd.conf (that would be far too obvious).  The file you need to edit is /etc/default/snmpd; change the following line:

    SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'
    to read
    SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'

    Snmpd will now listen on all network interfaces.  While making it more limited by default is a good design idea for security, I have trouble understanding the separate configuration location, especially as the init script in /etc/init.d/snmpd that starts the daemon has its own SNMPDOPTS variable which does not include this restriction!  Following the startup to work out what is happening seems to indicate that the restriction does not exist.  A little note in snmpd.conf mentioning this separate defaults file would be very helpful!
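After restarting snmpd you can confirm the change took effect; a quick check along these lines (community string and remote address are placeholders, and older systems may have netstat where newer ones have ss):

```shell
/etc/init.d/snmpd restart            # pick up the edited SNMPDOPTS
netstat -lun | grep 161              # UDP 161 should now show 0.0.0.0, not 127.0.0.1
snmpwalk -v2c -c public <server-ip> system   # run from another host to prove it
```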