Good day to you all. It has been some time since I last laid down a blog post, and it is fitting that my first one back after months of silence is about hyperconvergence. As most of you know, I was building my own HCI playground (I am a DIY kind of guy) and did so successfully with KVM, open-source object-based software-defined storage, some aspects of FUSE, and NFS. It took me a while to get everything built and working, as I do have a day job. After seeing all the buzz in the HCI space, and loving new technology, I decided to join the team at Cisco to help in their efforts to deliver the next generation in hyperconverged solutions.
Now, having been a veteran of the storage industry for over 18 years (damn, where did the time go?), I have seen many different approaches to getting better economics out of technology: switching from ATM to IP networks, replacing Motorola X.25 modems with faster, cheaper connectivity options, and optimizing the utilization of storage and compute with converged SAN/NAS architectures and virtualization. It has been an awesome adventure, and one I continue to learn from and be amazed by.
Hyperconvergence has been around for many years; my first thought of it, we'll call it HCI v1.0, is the mainframe. Hyperconvergence v2.1 takes the best of all worlds and "puts the disk back into the server" (to quote TC), as it does with many other aspects of our complicated technology stacks. You may be asking what happened to version 2.0; well, most of you have seen it in the first iterations of HCI (hyperconverged infrastructure). But before we get into what the next generation of hyperconvergence looks like today, I'd like to take you all down memory lane.
How many of you still have architectures like this in your environment? I would guess many, if not most, have some iteration of the above, and I do not see it going away anytime soon. A similar argument goes for tape: tape is supposedly dead because of the options now with cloud, inexpensive scale-out storage, backup appliances with deduplication, and many other reasons, yet tape is still in the market today. It may not be as strong as it was in the early 2000s, but there are still bits of data that need to be kept on tape. Yes, yes, I too make this argument from time to time, but I still think tape is not entirely dead. The same goes for converged infrastructures: we will continue to rely on these foundational architectures for many years to come, but like tape they will soon become the niche and not the norm.
As hyperconvergence gets off the ground in the coming years, you will see architectures transform from the above to the below, along with hybrid versions in between.
The main reason I like to discuss the old alongside the new is that you can't treat every workload as a nail just because you have a new hammer. It is naive to think we have come so far in such a short period of time that HCI is the answer to all your technology problems, but it is a new foundation that we can start to leverage for 70% to 80% of today's virtualized workloads.
Taking hyperconvergence to the next phase of its life has been a long road, one filled with innovative software-defined architectures built from the ground up to elevate HCI to a level not considered before: networking, software-defined storage, virtualization, simplified management, and next-generation server architectures with Cisco's UCS for compute and storage. HyperFlex is the next generation in hyperconvergence.
What is it that we focus on when discussing Cisco’s HyperFlex HX series appliances? I’ll tell you:
- Enterprise grade: robust data integrity, continuous availability, proactive auto-support, and fast, efficient snapshots that do not compromise performance
- Maximum simplicity: super-fast installations, simple management through the virtualization tools you already know today, and rapid cloning for VM provisioning
- Economical scaling: inline deduplication and compression, a scale-out, just-in-time architecture, the ability to scale compute and storage independently, and cost-effective in-house cloud economics
How many of you have had to clone infrastructures, whether for VDI, VSI, test and development, or classroom labs that need to be quickly rebuilt for the next class? With VDI we have seen this, but not so much in the other areas, and I can say from experience that when cloning a single server takes 15 to 25 minutes, recreating an environment of even 10 servers can take hours, or longer depending on the complexity of the environment. What if you could reduce the cloning of servers in your environment to minutes, if not seconds, depending on how many you need? How about snapshots in your VMware infrastructure: have you ever had to consolidate manually because the I/O on the back end was so overrun with other tasks that consolidating snapshots, even after backups, was challenging? All of this and more is why we are looking to streamline how we manage our day to day within our virtual environments. HyperFlex gives you snapshots that do not impact performance no matter how many you have, clones in minutes or even seconds that reduce the time to spin up environments, and management of these functions from an environment you already know like the back of your hand.
Build on the right foundation from the ground up: enterprise hardware with UCS, a platform built for hyperconvergence, a purpose-built log-structured file system designed for scale-out distributed storage, advanced data services such as near-instant snapshots and fast clones, data optimization with inline deduplication and compression without trade-offs, and a design that includes the network. The result is the next generation of hyperconvergence: Cisco's HyperFlex HX series appliances.
Below is a video of the solution I have been discussing in this post. Please check out all the buzz around Cisco's HyperFlex solution; I look forward to further discussions and a deeper dive into the technology in future posts.