So our beloved General Manager, Brandon Simms, has just returned to work after spending last week in sunny Tampa, Florida. How nice. What, pray tell, was he doing down there – besides running a 5K race that included a lap of the Daytona 500 race track? (By the way, that's where the picture in our recent Facebook post was taken, and that race was the event it concerned.) He was attending the BICSI Winter Conference & Exhibition. This annual conference is where members of BICSI – which DataCom Inc. is – and individuals carrying BICSI certifications – which Brandon does – meet for seminar training on all aspects of IT. There are also numerous exhibitions of current and future practices and materials for those of us in this industry.
At one of the lectures, it was made known that data transfer speeds in data centers will be increasing, and very soon. We old timers remember when 10 Mbit/sec was considered fast. (I once sat and argued with a vendor about the speed of my dial-up modem. They ended the conversation with "Mr. Cross, we only guarantee 24 kbit speeds, and that's plenty fast!" Sheesh.) Happily, it wasn't long before 10M became 100M. Today, 1 Gbit is considered "ok," but really everyone would prefer 10 Gbit over their network. Everyone who had 10G was looking forward to 40G and 400G, but it seems we're not going in that direction. Nay, nay. At the conference, it was stated that 800 Gbit speeds will very soon become the norm!
Hold on one second; I've written blogs in the last year detailing new cables that reach farther than the standards allow. These cables are for factory scenarios and carry communications between machines; they carry machine language in the form of raw data. Why am I mentioning this? Because they are also designed to operate at the aforementioned, slower, 100 Mbit speeds. It's confusing to me; if these machines are ok running at these lower speeds, why do we need the blisteringly fast communications I'm writing of today? But then it dawns on me that these are machines talking to machines – without any human involvement. No video, no multi-application transmissions, no timing, and, probably, very little quality control of the signal. Why not? Well, today's modern machines have built-in intuition. They "know" what to expect from their "brother" machines in any given situation. With the advent of AI (Artificial Intelligence), and with that functionality being built into mechanization, should any event occur of which a machine was not previously aware, it will have the ability to "learn." Between what it already knew and what it has learned, even if it receives "garbled" signals, it will be able to piece the data together and make the correct decision.
So ok, I realize that we're not talking about out on the factory floor, but still, 400G seems awfully fast. Why the need for 800G in the data center? Well, allow me this: "By 2025, IDC says worldwide data will grow 61% to 175 zettabytes, with as much of the data residing in the cloud as in data centers."*1 That's by the year 2025 – just two years from now! At that crazy-high amount of traffic, data centers, especially those making up the cloud, simply must go faster to be able to move that vast amount of data across the networks.
Mind blown? No? Well, at the conference, it was shared that by 2030 that 175 number is expected to increase to over 2,000. 2,000+ zettabytes! Oh, sorry – what's a zettabyte? "A zettabyte is a trillion gigabytes."*1 With more and more offices, homes, vehicles, and just plain ol' people joining the IoT (Internet of Things) each and every day, and the slow-but-sure takeover of all things on this planet by AI, these numbers – the sheer astronomical amount of data being transferred every day – will grow exponentially!
*1 https://www.networkworld.com/article/3325397/idc-expect-175-zettabytes-of-data-worldwide-by-2025.html
Until 2022, the metric prefixes topped out at "yotta" (pronounced: Yoda. That's too funny). With the increases we're talking about, they added "ronna" (1 followed by 27 zeros) and "quetta" (1 followed by 30 zeros). Will that be enough? If we've been paying attention, we know the answer is – no. As for myself, my poor old mind, at this late stage, while able, quite simply doesn't want to contemplate that.
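For any readers who like to see the arithmetic spelled out, here is a minimal sketch in Python of the standard SI prefix exponents (this table is just the published SI definitions, not anything specific from the conference):

```python
# SI metric prefixes as powers of ten, including the two added in 2022.
prefixes = {
    "kilo": 3, "mega": 6, "giga": 9, "tera": 12,
    "peta": 15, "exa": 18, "zetta": 21, "yotta": 24,
    "ronna": 27,   # added in 2022
    "quetta": 30,  # added in 2022
}

# "A zettabyte is a trillion gigabytes": 10**21 / 10**9 = 10**12
gigabytes_per_zettabyte = 10 ** (prefixes["zetta"] - prefixes["giga"])
print(gigabytes_per_zettabyte)  # 1000000000000, i.e. one trillion

# IDC's 175 zettabytes, expressed in gigabytes:
print(175 * gigabytes_per_zettabyte)
```

Run it yourself and you can see why the naming committee had to keep reaching for new prefixes.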
But this is why DataCom Inc. expends the time and expense to send our people to these conferences. This is where we get to see what is on the horizon. We're thinking: "What part of what we learn can we bring back to the Mahoning Valley to share with our customers? What training do our technicians and workers need today to prepare for the future?" It truly is exciting, and while this old timer may not still be working when it all comes to fruition, I'm happy knowing it is all but guaranteed to arrive for our younger workers – and for DataCom's customers, as they find out what portion of this future "fits" their workplace.
The final result of Brandon's attendance is Continuing Education Credits, which are mandatory to maintain his certification as an RCDD. This certification is awarded to individuals who have demonstrated superlative knowledge in all aspects of designing data networks and IT infrastructure. So if the numbers I've shared in this blog are daunting to you, give us a call and we'll have Brandon sit down with you to discuss how DataCom Inc. can best install today's fast, smart data networks and systems at your facility, so that you may join the coming revolution.
Stand by: this story will continue when I share exactly what kind of cables will be carrying this enormous amount of data to you.