Just a quick post to say I've pushed my first piece of code in over 5 years to GitHub. It's a clever little Objective-C iOS category on UIViewController that seamlessly overlays a UILabel on every view controller-managed view, showing the class, nib or storyboard name in use. Great for debugging old or inherited projects with minefield architectures. It uses some cool libobjc runtime techniques under the hood, but adopting the category is just a case of dropping it into your project and hitting Build + Go!
Recently I began experimenting with KVM virtualisation in the Linux kernel. It's a great technology that, provided your CPU supports Intel VT-x or AMD-V, offers almost (really, almost) bare-metal performance inside virtual machines. It works on most Linux flavours and comes with a couple of handy management tools such as virsh and virt-manager. However, one thing I always found lacking, and which annoyed me, was of course the ability to manage my hypervisor from my iPhone / iPad while on the move! Time for an experiment, I thought; and out came "KVM Remote".
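For anyone new to virsh, here's a sketch of the day-to-day commands involved (the domain name "myvm" is a placeholder for your own VM):

```shell
# A few everyday virsh commands for managing a KVM hypervisor
virsh list --all             # show all defined VMs and their state
virsh start myvm             # boot a VM
virsh shutdown myvm          # ask the guest to shut down cleanly
virsh dominfo myvm           # CPU / memory / state summary
```

These are exactly the operations KVM Remote exposes from a touch interface.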
KVM Remote on the iPad and 3 Different Remote Hypervisors
It's universal, so it works on both the iPhone and iPad. It's extremely bleeding edge right now, but it works! It's also, incidentally, the first app I've made without selfish fiscal intentions, so there's another great reason to download it from the App Store now!
P.S. I'll be updating it regularly, adding more features as requests come in.
Out of the box, the Raspberry Pi comes with an ARM1176JZF-S core (ARMv6 with hard float, aka the armhf arch) running at 700 MHz as part of the Broadcom SoC. The memory frequency is also limited. In recent firmwares, however, tinkerers have had the ability to "overclock" the Raspberry Pi to squeeze some extra juice out of it. Mine's currently running at 1 GHz at a solid 48°C under load. So the first question that springs to mind is… why doesn't everyone overclock their Raspberry Pi? Well, there have been (well-founded) reports of SD card corruption, heat/power issues and instability. The idea of this post is to show you how to squeeze every last bit, cycle and IOP out of your Pi safely('ish) and without being an astrophysicist. Read on for the know-how. Continue reading →
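For reference, the overclock knobs live in /boot/config.txt on the Pi. A sketch of the kind of settings involved (the parameter names are the real firmware options; the values here are illustrative, not a recommendation — the full post covers what's safe):

```ini
# /boot/config.txt — illustrative overclock values only
arm_freq=1000        # ARM core clock in MHz (stock is 700)
core_freq=500        # GPU/core clock, which also drives the L2 cache
sdram_freq=500       # memory clock in MHz
over_voltage=6       # raises core voltage in 0.025 V steps
force_turbo=0        # leave at 0 so the firmware can still throttle on heat
```

You can keep an eye on the temperature afterwards with `vcgencmd measure_temp`.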
So in the last post I discussed why the Mac Mini is the perfect machine for Linux and for datacentres in general! One frustration some readers may have found is that the networking chipset used by the Ivy Bridge platform in the Late 2012 Mac Mini doesn't have native support in the Linux kernel (as of now, anyway). So it's necessary to install a kernel module from the vendor (Broadcom).
On their website they provide the "tg3" drivers for Linux, but these are only good if you are running a kernel older than 3.5.x. Take Ubuntu, for example: 12.04 uses the 3.2.x stream, whereas 12.10 uses the 3.5.x stream and isn't immediately compatible with the drivers on the Broadcom page. This is due to the deprecation in 3.x, and removal in 3.5.x, of the asm/system.h header.
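If you want to build Broadcom's tarball against a newer kernel anyway, the usual workaround is to strip the dead include before compiling. A sketch (the directory name and file layout are illustrative; check the actual tarball you downloaded):

```shell
# Sketch: remove the <asm/system.h> include that no longer exists in
# 3.5.x kernel headers, then build the module against the running kernel.
cd tg3-src/                                        # wherever you unpacked the tarball
sed -i '/#include <asm\/system.h>/d' tg3.c         # drop the dead include line
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
sudo make install
sudo modprobe tg3
```

Whether the driver then compiles cleanly depends on how much else changed between kernel streams, so treat this as a starting point rather than a guaranteed fix.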
Well, I've used FreeNAS for around 2+ years now, and all has been good. However, in that time my demand for large quantities of storage has been joined by a demand for high-speed storage. Once I had replaced all of my drives with 2 TB 7200 rpm drives, I realised FreeNAS wasn't giving me the per-drive performance I'd like.
Welcome NexentaStor… a storage appliance that natively supports ZFS, as it's based on OpenSolaris! NexentaStor offers many of the same features as FreeNAS, but at a greater level of performance. This comes at a cost though: the free Community edition is limited to a somewhat large 18 TB, beyond which the paid version will cost you.
Also, NexentaStor is a pure storage appliance; although it supports CIFS/iSCSI/NFS and the like, it doesn't have all the bells and whistles of FreeNAS… but for me there's no use having all those features if I can't have the speed.
I'm installing NexentaStor as we speak; once I've got it up and running and configured to my liking, I'll be posting a review / tutorial. I hope you enjoy it!
Hi everyone! I keep getting lots of emails from people asking where they can buy this or that to complete the tutorials and try out some of the things listed on CaptainGeek. After emailing people the same links over and over, I had a thought: why not set up an Amazon affiliate store? Basically, I've set up a small Amazon site with a small selection of products (only those used for the tutorials on this site, plus related ones). Purchases and payments are handled by Amazon, but a small percentage of each sale goes towards funding the server this website is hosted on, AT NO EXTRA COST TO YOU. So it's a win-win situation; please use the links whenever you can.
Ever wanted to access the shared libraries from your home… when you're away from home? I do. Since I started using FreeNAS with the Firefly iTunes/DAAP media server, this is exactly what I want… I'm often away from home, and always wanting to access my media library from within iTunes…
Bonjour (mDNS) is what the iTunes / Firefly DAAP server uses to advertise a "beacon" for your shared iTunes library on your local subnet (LAN). However, this is restricted (due to industry pressure (RIAA)) to the local area only, and not the wide area (the Internet) as it once was… Still, it's easy to work around this restriction so you can listen to your shared libraries on the go!
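One way to sketch the workaround on a Mac: tunnel the DAAP port home over SSH, then re-advertise the tunnelled service locally with `dns-sd` so iTunes sees it as a normal shared library. The hostname and service name below are placeholders:

```shell
# Forward DAAP (port 3689) from the Firefly box at home to this Mac.
# "home.example.com" and "user" are placeholders for your own setup.
ssh -N -L 3689:localhost:3689 user@home.example.com &

# Re-advertise the tunnelled library on the local link so iTunes picks
# it up. dns-sd -P registers a proxy record:
#   name, service type, domain, port, target host, target address
dns-sd -P "Home Library" _daap._tcp local 3689 localhost.local 127.0.0.1
```

Since iTunes only looks for `_daap._tcp` beacons on the local subnet, the proxy record makes the remote library appear as if it were on your LAN.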
This tutorial is aimed at Mac users, but the concept is possible on any OS with the appropriate tools…
Recently I had a thought… Most of my machines are sitting redundant with up to 4 drives in each… Without unscrewing every single one of them out of my rack, I want to combine all of that space into one giant zpool using ZFS.
Imagine combining the drive space of 10 computers into 1 giant drive? See where I'm going with this now?
So my idea is to make the drives in each machine available to the "ZFS Master" (the Solaris box running the ZFS pool) via iSCSI, which is essentially an "offer your drives at block level over Ethernet" protocol, then add them all into one giant zpool. The advantages of this are:
Utilising all of my hardware
iSCSI can work over a WAN, so I could use boxes I have in other cities
Have each LUN ("individual computer + drives") power up via WOL (Wake-on-LAN), initiated by the ZFS Master
A greater level of redundancy is possible.
A backup "ZFS Master" is possible
Everything connected via either Gigabit Ethernet or Fibre Channel.
So imagine… a rack full of computers with hard drives in them… At the bottom is a more powerful computer running Solaris, which mounts the HDDs of every other computer and adds them into the zpool… then advertises this zpool over AFP / SMB to the computers in my house…
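The plan above can be sketched in Solaris commands. The IP addresses, pool name and device names below are all placeholders; each donor box would need to be exporting its drives as iSCSI targets first:

```shell
# On the Solaris "ZFS Master": discover each donor machine's iSCSI targets
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm add discovery-address 192.168.1.11:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi            # create device nodes for the new remote LUNs

# Build the pool out of the remote disks (device names are illustrative)
zpool create megapool raidz c2t1d0 c2t2d0 c2t3d0

# Later, grow it with another machine's drives
zpool add megapool raidz c3t1d0 c3t2d0 c3t3d0
```

From ZFS's point of view the remote LUNs are just ordinary disks, which is what makes the whole scheme possible.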
Yet another way to make a multi-terabyte system out of old, free components that could possibly outperform a £20,000 solution! =) I'll post all of the results of my testing after the break in a few days
P.S. This idea doesn't yet consider performance, as that's something I can work out later. Thanks bda for your advice
Have you seen the Drobo box? It's a SAN that lets you create giant volumes and hot-swap hard drives at will, with failure tolerance… The bad news is that it costs close to £1000 even without the drives. I'll explain how to make a better one… for free! =)
ZFS (Zettabyte File System) is Sun's newest file-system offering. It's supported natively on FreeBSD / Solaris, and on Mac OS X / Linux / Windows via third-party tools. I'm going to keep this guide simple, short and sweet, so I'll bullet-list the main features that wow people about ZFS =)
It can store up to 340 quadrillion zettabytes of data (no other production file system can do this)
It checksums your data on the fly, so you can verify integrity by "scrubbing" it (identifying failing drives before they die completely)
It natively supports every RAID configuration you can think of, and doesn't suffer from the RAID-5 write hole.
You can create snapshots of your data that don't waste drive capacity.
Volumes, or "pools", can be expanded at any time, so you can start with a 2 TB RAID and grow it to a 10 TB RAID with no data loss.
You can mix and match drive capacities, brands and spindle speeds.
It's reliable* (on officially supported incarnations, anyway)
It's a memory hog (don't try it unless you have 2 GB of RAM in your system)
It's supported in the latest version of FreeNAS (0.7)
It allows hot-plugging of drives when one fails (so you don't lose data or time)
Hot spares are supported
Pools can easily be transferred to any other ZFS-supporting system without extensive configuration or any data loss.
It's free, free, free (under the CDDL).
Think of a hardware RAID-5, or a geom_concat/raid setup, then think of them again without any of the issues / flaws they have… that's what ZFS is! =)
So let's get started. I'll run through creating and bringing a ZFS RAID online first, then some maintenance commands afterwards. I suggest trying this on a… Continue reading →
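As a taste of what the walkthrough covers, here's a sketch of the core commands (pool name and device nodes are illustrative; the FreeBSD-style `da0`-`da2` names would differ on Solaris):

```shell
# Create a raidz (single-parity) pool named "tank" from three drives
zpool create tank raidz da0 da1 da2

zpool status tank                  # check health and layout
zpool scrub tank                   # verify every checksum in the background
zfs snapshot tank@before-upgrade   # near-free snapshot of the whole pool

# Moving the drives to another ZFS-capable machine:
zpool export tank                  # detach cleanly before pulling the disks
zpool import tank                  # re-attach on the new system
```

That export/import pair is what makes the "transfer to any other ZFS system" bullet above so painless.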
Setting up IPv6 connectivity may at first seem daunting and complicated, and yes… it is. But I'm going to make it really easy for you. I'm not going to assume you understand IPv6, and I'm not going to try and teach you; that's for you to research yourself. In this guide I'm going to document the easiest way to set up IPv6 connectivity, whether you're behind a NAT / router or directly connected to the Internet. In future tutorials I'll go into more detail, including router / subnet configuration, the 6in4 method and much more!
Setting up an IPv6 Tunnel behind a NAT / Router (using TSP)
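For reference, the heart of a TSP setup is the client's tspc.conf. A sketch of the kind of entries involved for an anonymous tunnel behind NAT (the broker address and interface name are illustrative; check your TSP client's sample config for the exact keys it supports):

```ini
# tspc.conf sketch — anonymous TSP tunnel that traverses NAT
server=broker.freenet6.net   # a public TSP tunnel broker
auth_method=anonymous        # no account needed for a basic tunnel
tunnel_mode=v6udpv6          # IPv6-in-UDP, so it works behind NAT
if_tunnel_v6udpv6=tun        # local tunnel interface the client configures
```

With that in place, starting the tspc client negotiates the tunnel and brings up the interface for you.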
Setting up basic IPv6 connectivity is as simple as that with TSP. In future tutorials I'll write how-tos on router configuration, the 6in4 method (which has less overhead but doesn't support NAT traversal), and OpenWrt configuration! Enjoy!