Keith Smith - My Blog

Journal of thoughts


Life and Work Balance

Monday, February 22, 2016 - Posted by Keith A. Smith, in Journal of thoughts

This post is private; you need to be an active subscriber to view it. Click here to subscribe.

vCenter 6.0 interface sucks

Thursday, September 10, 2015 - Posted by Keith A. Smith, in VMware, Journal of thoughts

As mentioned here, I finally made the move back to vSphere and decided to go with version 6. There has been chatter over the past few years that a great new web client was coming. Well, it finally came, and honestly they would have been better off sticking with the fat client. Why, you may ask? Because the interface is Flash! Yes, the same Flash that should have been deprecated by now, the same Flash that has more zero-day vulnerabilities than a tennis net has holes. I don't understand why any company would develop an interface in Flash or Java at this point. I can hear some people at VMware saying that developing the vSphere 6 web client in HTML5 would take more time. I think most if not all customers would say: OK, take the time, because HTML5 is the best way to go, point blank!

I will say this: vCenter 6 doesn't totally suck.

The good:

1) New platform architecture

2) The upgrade process is supposed to be a lot easier

The bad:

1) Uses Flash — vulnerabilities, vulnerabilities, and terrible usability

2) Uses Java — a catastrophe; compatibility issues with every Java upgrade, "security enhancements", vulnerabilities

3) Uses browsers & plugins — impacted by browser releases and changes, versions, vulnerabilities

I think VMware needs to be more transparent about what they are doing to replace this terrible Flash interface. I also think they need to keep the TAMs informed so they can keep customers apprised of the progress.



Our time in Seattle

Wednesday, August 26, 2015 - Posted by Keith A. Smith, in Journal of thoughts

This post is private; you need to be an active subscriber to view it. Click here to subscribe.

My Thoughts on Docker

Tuesday, June 30, 2015 - Posted by Keith A. Smith, in Journal of thoughts

Docker uses Linux LXC to encapsulate a fixed environment into which you have built some software that depends on a stable config and wants isolation from everything else. To the software it feels like it is alone on a machine, but actually it is alone in what Docker calls a container.

You can run hundreds to thousands of containers on one machine, and you can group containers together to make larger projects. With the encapsulation, you can patch or upgrade the OS without any fear it will break something running in a container. Unlike VMware, the encapsulation is not at the chip level with a hypervisor, but at the OS level. So those big servers you have, the ones that could easily run heaps of things but where you don't really want heaps of VMs (which really just passes the update/patch/reboot buck, if you think about it), can run heaps of containers instead. In my opinion Docker is reminiscent of FreeBSD 4.0 jails, developed in 2000 for a hosting company, which predated Solaris Zones.

Docker has many concerns, particularly around security, which in turn can be a gating or otherwise limiting factor in acceptance by several industries. This is the nature of open source: there are many options, because every individual who disagrees with someone else spawns their own solution addressing what they view as the most important problems. In the end there are many container options, even just in the case of Linux.

Companies that have embraced container-based virtualization often have more than just one such technology in place; this year's OpenStack Summit showed this strongly. I do see great potential with containers. One of the caveats I am facing right now when designing my potential future architecture is redundancy/availability. There is no live migration of containers, so you have to consider that. You would have redundant containers, but I can see where IP addressing can get a bit complicated when using keepalived or ucarp, because they wouldn't work at the container level but at the Docker host level. If you lose a container, the virtual IPs wouldn't be active on the other host. And Docker uses its own network addressing for the containers; therefore, essentially each Docker host is a "router".
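The host-level failover concern above can be sketched roughly like this. This is a toy model, not real keepalived/ucarp code; the host and container names are made up. The point it illustrates: the virtual IP moves between hosts, so the standby host must already be running its own copy of the service, because the containers on the dead host die with it.

```python
# Toy model of VIP failover at the Docker-host level (not the container level).
# keepalived/ucarp elect which *host* answers on the virtual IP; each host
# NATs its own private container network, so container IPs never move.

class DockerHost:
    def __init__(self, name, containers):
        self.name = name
        self.alive = True
        self.containers = list(containers)  # containers die with their host

def vip_owner(hosts):
    """The first healthy host holds the virtual IP (rough ucarp behavior)."""
    return next((h for h in hosts if h.alive), None)

primary = DockerHost("host-a", ["web-1"])
standby = DockerHost("host-b", ["web-2"])  # redundant copy, started in advance
hosts = [primary, standby]

assert vip_owner(hosts).name == "host-a"

primary.alive = False          # host-a fails: web-1 is gone, no live migration
owner = vip_owner(hosts)       # the VIP fails over to host-b...
print(owner.name, owner.containers)  # ...which answers with its own web-2
```

The takeaway is that redundancy has to be provisioned ahead of time at the container level, while the failover mechanism itself only operates at the host level.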

The start of the madness

Friday, August 29, 2014 - Posted by Keith A. Smith, in Network, Xen, Journal of thoughts

After deciding to cut the cord in February of 2014, I thought I should build a network to support our entertainment needs. I cancelled our FiOS TV service because of the annual rate hikes and went internet-only in order to save more $$$; besides, we didn't watch a whole lot of TV, and when we did it was only certain channels. After killing the TV service I was able to negotiate a bump in bandwidth from 25/25 to 75/75, which was much needed. I started by purchasing a box of CAT6, and since I already had the other items (e.g. connectors, crimper, etc.) I made a weekend project out of it. I put in drops in every room and in a few other areas that were a pain to get to; those areas were costly because I put holes in the ceiling while in the attic. Next I purchased the Synology 1513+ NAS for about $842 from Amazon in July of 2014. I got it diskless because I didn't know what drives I wanted to put in it at the time; I settled on 5 Western Digital Caviar Green 3 TB SATA III drives, which ran about $674 from TigerDirect.

At this point I had to make a call on what switch and new firewall I was going to use. I thought to go Cisco and grab a 3750X along with an ASA 5510. That never happened, because Cisco now requires you to have SMARTnet to download IOS, so I moved on to HP (which used to be 3Com); I had used those switches before and they worked great. I managed to find an 1810G ProCurve managed switch on Amazon for $169, then started doing some research on firewalls again. It was now down to Juniper, Fortinet and SonicWall. I always liked SonicWall along with Juniper, but SonicWall was still more than I wanted to pay, and Juniper seemed limited on throughput in the price range I was looking in. I checked out Fortinet but still wanted to find something else to compare it to, and I somehow stumbled upon the WatchGuard line.

I did some deeper internet research on the WatchGuard products and liked what I saw. I managed to find a demo of what the web interface was like from a management standpoint and was sold on it, so I started looking at models and prices for WatchGuard. The T10 ended up being the one I was willing to start out with; I purchased it from Newegg for $200 and the license from CDW for $60. All the network gear arrived on a Friday, which was perfect because I would have time to get it all set up over the weekend. I started with the firewall, thinking it would be the fastest to set up. I was wrong on that thought... I set up the rules that were needed along with the VLANs on the 1810G, but the main issue was that nothing had outbound access to the internet. I tinkered with the rule base for hours, then came to a point where I knew I had set up everything correctly and the cause had to be something else. It was late (around 2am), so I went to sleep because I was out of ideas, and the kids were driving me nuts because they couldn't watch TV thanks to me.

I woke up around 7ish to get back at it. I finished the config on the switch and was sure I had set up the firewall correctly, but still no outbound traffic was allowed. I did a lot of internet research but didn't find anything that really helped, so I went back through all the docs that came with the T10 to see if there was something I had missed. By around 7pm Saturday I had found everything I needed to call support, because I had a thought that perhaps this device needed to be activated before use. After speaking to support I found I was right: they have a live subscription that needs to be activated, so we took care of that and bam, outbound internet access. It's always the small things that cause the bigger issues. Once that was resolved I was able to bring up all the Amazon Fire TVs along with the wi-fi.

Now that the internet was up I could move to the NAS. I set up the Synology 1513+ with the 3TB drives I had bought and configured the bond with LACP; that was a pain, mostly because of the way I had set up the interfaces on the switch. For some reason ports 14, 16, 18, and 20 were part of trunk4, but the trunk itself was untagged while the ports were still tagged. I removed the ports from the trunk, made sure they were on VLAN 4 and untagged, then put them back into trunk4 as LACP members, and it works like a champ: 4 Gbps of aggregate throughput. After that I migrated all my data off the "cloud" services, and once that was done I enabled some of the sync features so I could get to the things I needed while on the go.
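Worth noting why an LACP bond like this gives 4 Gbps in aggregate rather than 4 Gbps to any single transfer: the switch and the NAS hash each conversation onto exactly one member link so packets stay in order. A rough sketch of that selection logic (the hash inputs, link count, and MAC values here are illustrative; real gear uses its own hash policy):

```python
import zlib

# Toy model of LACP member-link selection: each flow is pinned to one
# link, so a single flow tops out at one link's speed (1 Gbps here),
# while many concurrent flows spread across all four links.

LINKS = 4  # four 1 Gbps ports in trunk4

def pick_link(src_mac: str, dst_mac: str) -> int:
    """Deterministically map a src/dst MAC pair onto one member link."""
    return zlib.crc32(f"{src_mac}-{dst_mac}".encode()) % LINKS

# The same conversation always lands on the same link (no reordering)...
a = pick_link("aa:bb:cc:00:00:01", "nas-mac")
assert a == pick_link("aa:bb:cc:00:00:01", "nas-mac")

# ...while different clients hash onto (potentially) different links.
flows = [(f"aa:bb:cc:00:00:{i:02x}", "nas-mac") for i in range(8)]
print(sorted({pick_link(s, d) for s, d in flows}))
```

So the 4 Gbps figure shows up when several devices hit the NAS at once, not on one big file copy from a single machine.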

The next thing I figured I would work on was the wifi service improvements. My old Cisco/Linksys WRT350N router was due to be relocated to light duty, since it had been the edge gateway/router/wifi AP. I started looking around at the newest wifi routers on the market; for me it came down to the Asus RT-AC68U and the Netgear Nighthawk tri-band router. The features were about the same, so it came down to price: I went with the Asus RT-AC68U from Amazon for $199 and haven't looked back since. I first ran the Merlin firmware on the RT-AC68U, but it couldn't achieve all that I wanted, so I ended up flashing it with DD-WRT, which I had used before on previous devices. I was able to set up my HP printer on it so we could print wirelessly, but I couldn't get the guest network to work the way I needed it to.

The guest network was not stable, and after much testing and research I found it was some sort of issue with the dhcpd in the version of DD-WRT I was running. Enter the WRT350N once again... this time I set it up on its own VLAN for guest wifi devices that needed internet only, so I could have a proper "guest network".

A few months went by before I started working on things again. I purchased a TV/wall mount kit for my mancave and set up my Xbox along with a Mac mini for entertainment. I also got a few Dell OptiPlex 780s that had been retired from work, set up XenServer on them and connected them to the 1513+. Then I looked at the core of the network and thought I should buy a rack so I could organize everything; it all worked, but it was an eyesore. I didn't want a 42U rack because I knew I would never have that much gear, and I found a neat little Tripp Lite SRW12US 12U wall-mount rack enclosure server cabinet on eBay. The specs were perfect:

Height: 25"
Width: 23.6"
Depth: 21.6"
Rack width: 19"
Rack height: 12U

They seemed to sell in the $400 range on eBay and Amazon, which seemed a bit much to me for a 12U rack. I spotted one on eBay in a bidding state and sniped it from everyone at the last minute for $132. At that price it was a total steal, and it came with the cage nuts along with keys for the doors. I bought a universal rack tray to sit the NAS on, another 2GB module for the 1513+ for $50, a wire organizer panel for $18, and a rackmount PDU for $40, all from Amazon. I re-wired all the cables for everything that was close and connected to the 1810G, then installed everything into the rack. Some of the work was painful at the time, but in the end it was all worth it, and looking back I would even say it was fun. The next thing on my list is to obtain more powerful servers to be my next set of hypervisors. I thought to build my own, but it looks like that would cost around $2000 or so, so I have moved on from that idea and am looking at used servers that will have enough resources (CPU & RAM) to support the VMs I want to run. The tough part is finding enterprise-type servers that will fit in my small rack.

I started looking at older Sun and Apple servers on eBay because they were cheap, but then had the thought to check the XenServer HCL to make sure this was going to work. I found out that other people had managed to get some versions of Xen onto Sun and Apple servers, but I didn't want to chance it, so I decided to use the HCL as a guide to help find my next set of servers. I started looking at the Dell models and checking the chassis specs to make sure the server would fit in the rack, and found the PowerEdge R210, which looked like it would fit the bill. I ended up buying 2 of the PowerEdge R210s and more RAM to max them out at 32GB each. After receiving them I unpacked them; any time I order a used server I check that everything is seated properly (e.g. RAM, processor, etc.). So far so good, so I racked them and powered them on to get an idea of just how noisy these servers were going to be together. I let them run for a few hours and determined that they aren't as loud as a normal 1U server would be, but still a bit too noisy for my liking, so I powered them off and un-racked them to inspect the fans, because they are always the culprit in noisy servers. I noticed that one of the servers was slightly noisier than the other, and on my second inspection I saw that they had mismatched fans, so I ordered more and removed one fan from each. The servers run very quietly now, which is exactly what I wanted.
