FAST Data Assets: A Business Must Have
These days my day job has my mind fully occupied 24/7/365 helping to position the value of FAST data, found in a fast data management capability we offer to DC operators as software for the Linux storage systems serving MSPs and their customers. How these operators acquire such a capability and have their own MSP customers pay for it is simple in our case. How the MSPs must then integrate and package up that FAST data capability as part of their own SaaS (Software as a Service) offer, get paid for it per seat, and let the FAST data differentiation show through so as to attract more customers than their competitors, is a completely different challenge.
In 2004 Dr. Richard Hackathorn wrote “The BI Watch: Real-Time to Real-Value” (source: https://www.researchgate.net/publication/228498840_The_BI_watch_real-time_to_real-value), which, when described in a single picture, looks like this:
These days, the ‘data stored’ is clearly a ‘write to storage media’ attached to some computer somewhere, measured in IOPS, defined by Wikipedia as:
“IOPS (Input/Output operations per second) is a performance benchmark used to measure the speed and efficiency of computer storage devices like hard disk drives, solid-state drives, and storage area networks” source: https://en.wikipedia.org/wiki/IOPS
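As a rough back-of-the-envelope illustration (my own sketch, not from the Wikipedia article), IOPS can be related to sustained throughput and the size of each I/O:

```python
def iops(throughput_mb_per_s: float, io_size_kb: float) -> float:
    """Approximate IOPS from sustained throughput and the size of each I/O."""
    return (throughput_mb_per_s * 1024) / io_size_kb

# A drive sustaining 500 MB/s of 4 KB I/Os is doing roughly:
print(round(iops(500, 4)))  # 128000 IOPS
```

Small random I/Os produce big IOPS numbers at modest throughput, which is why storage vendors typically quote 4 KB figures.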
‘Information delivered’, in simple IT data center terms, is a ‘data read’.
All along the way, the time delays between these events are framed by Hackathorn as ‘latencies’.
The idea for those seeking fortunes in the SaaS market is simple: get rid of as much latency as possible on the journey to deliver value to end users, and you have a winning formula, because the parties involved in a business or operations event wait less for the outcome or action to take place.
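To make Hackathorn's latencies concrete, here is a minimal sketch: the three stages follow his event-to-action flow, but the millisecond values are entirely hypothetical numbers of my own.

```python
# Hypothetical latency budget across Hackathorn's event-to-action flow
# (illustrative values, not measurements from any real system).
latencies_ms = {
    "capture: business event -> data stored": 50,
    "analysis: data stored -> information delivered": 200,
    "decision: information delivered -> action taken": 500,
}
action_time_ms = sum(latencies_ms.values())
print(f"total action time: {action_time_ms} ms")  # 750 ms
```

Shaving the storage write (capture) alone helps, but the biggest wins come from attacking the largest latency in the chain, wherever it sits.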
For many years the focus in the ‘Information Age’ has been centred around “Data as the New Gold” (source: https://www.masterschool.com/magazine/data-is-the-new-gold/). In an article published by Masterschool on Jul 3, 2023, they point out:
“What’s important to understand is that the value in data isn’t its rarity — the value is the potential and what we’re able to do with it.”
So while DC operators may offer infrastructure as a service with fast FLASH storage IOPS that can be utilized by databases serving upstream application servers, web servers, fancy microservices middleware, and UI front ends, there is no guarantee that raw read and write speed will actually surface to make the customer experience faster or the workload complete in, say, half the time it did in the past.
Hyperscalers (Google and Amazon in particular) have in fact completely skirted the issue of hardware storage being a bottleneck by creating cloud storage services that simply cache everything in memory and, when they get a chance, write the finalized customer data to their own storage systems. This makes the user believe ‘up front’ that their data has been written in world-record time to storage media, when in fact no such herculean feat has been completed; instead, they are relying on very expensive battery or diesel backup power systems to keep the memory and compute operating long enough to write the data to permanent storage in the event of a power failure, like those happening in California as I write this post. All good for them and the customer, as long as the customer is willing to pay 50X the cost of the storage media to write their data to a bunch of RAM and trust these hyperscalers will get their data to permanent storage media at some point.
If data is the new gold, then FAST data is indeed, by any measure, the new platinum, provided one can apply the FAST part to those areas of the “Components of Action Time” that will have the most effect on shrinking latency along the way.
So what does FAST data look like in the form of FAST FLASH Storage?
Well, for one thing, reducing the number of times data is re-written to de-fragment, delete old files, and place data on areas of the media that will not lose a charge level, flipping a 1 to a 0 (so-called bit rot), becomes very important to all FLASH storage systems supporting MSP SaaS services. One wants the FLASH storage infrastructure to spend as much time as possible ‘in-band’ servicing CX and workload reads and writes, and as little time as possible keeping the FLASH storage organized, the latter being instrumental in keeping FLASH reads and writes fast. (Data written as one contiguous chain of blocks, grouping same-session blocks together into a long empty space on the media, is always written much faster than data written to random openings; likewise, data read from that same long contiguous space is always much faster than data read from a bunch of random locations.)
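Those ‘extra’ re-writes are usually quantified as a write amplification factor (WAF), a standard measure in the FLASH industry. A minimal sketch with invented example numbers:

```python
def write_amplification_factor(host_gb_written: float, flash_gb_written: float) -> float:
    """WAF = data physically written to flash / data the host asked to write.
    Garbage collection, de-fragmentation, and wear-leveling inflate the numerator."""
    return flash_gb_written / host_gb_written

# The host writes 100 GB, but GC and data relocation cause 250 GB
# of actual flash writes behind the scenes:
print(write_amplification_factor(100, 250))  # WAF of 2.5
```

A WAF of 1.0 is the ideal: every byte the host writes lands on the media exactly once.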
That said, FAST data only becomes a valuable asset when implemented cost-effectively in key parts of the “event to action” flow described by Hackathorn.
Almost anyone with a large enough bank account and budget can build a FAST data FLASH storage solution to support their SaaS and take the lion’s share of their market, at least in an up market. The old “kill it with hardware at any cost” model of IT infrastructure investment and build works every time, when times are good.
When markets slow, however, IT budget spend slows and tightens, and the evaluation cycle for new technology adoption lengthens to test the staying power of all FLASH storage vendors while they are compared more closely by infrastructure operators, as even the biggest of companies search for more IT savings and the ability to lower prices without losing customers in tougher economic times.
Those FLASH storage system vendors that come up short by not reducing their write count for these necessary ‘out of band’ FLASH storage actions will have ‘half FAST storage solutions’, which are less likely to win the day or the deal.
Those that reduce their write counts, a practice often referred to in the FLASH storage industry as ‘Write Amplification Reduction’, are more likely to win the day and the deal. Not only are they faster, speeding CX and shrinking workload completion times alike, but these same winning solutions also increase FLASH drive life by 2X or more and reduce power consumption, improving the DC operator’s margins significantly. Some of that windfall gets passed on to their MSP customers, who are then able to pass some of that winning FAST data value on to their own customers’ experience via their SaaS offering.
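The 2X-or-more drive-life claim follows directly from endurance arithmetic. A sketch under assumed numbers (the 3,000 TBW endurance rating, the 500 GB/day workload, and both WAF values are hypothetical, chosen only to show the shape of the math):

```python
def drive_life_years(rated_tbw: float, waf: float, host_gb_per_day: float) -> float:
    """Estimated drive life: rated endurance (terabytes written) divided by
    the actual flash writes per day, which scale with write amplification."""
    flash_tb_per_day = host_gb_per_day * waf / 1000
    return rated_tbw / flash_tb_per_day / 365

life_at_waf_3 = drive_life_years(3000, 3.0, 500)    # ~5.5 years
life_at_waf_1_5 = drive_life_years(3000, 1.5, 500)  # ~11 years
print(round(life_at_waf_1_5 / life_at_waf_3, 1))  # 2.0 -- halving WAF doubles life
```

The ratio is what matters: cut write amplification in half and, all else equal, the same media lasts twice as long.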
Those that do that write reduction with custom hardware at additional cost are less likely to win, given Intel and AMD computational horsepower increases and the ‘built in’ hardware compression in their multicore 64-bit architectures, which, when mated to fast data management software, achieves the same level of write reduction at significantly lower cost than custom FLASH hardware solutions.
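A small sketch of why compression cuts write counts. Python's zlib stands in here for the CPU-accelerated compression mentioned above, and the sample data is invented:

```python
import zlib

# Compressing data before it reaches flash reduces the bytes physically
# written, which lowers write amplification and extends media life.
data = b"session log entry: user=42 action=read status=ok\n" * 10_000
compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)
print(f"{len(data)} bytes in, {len(compressed)} bytes to flash (~{ratio:.0f}x smaller)")
```

Highly repetitive data such as logs compresses dramatically; already-compressed media (video, images) barely compresses at all, so the real-world savings are workload-dependent.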
So those vendors in the FLASH storage arena with less write amplification, implemented in software on Linux, are likely to win the day and the deal more often with the DC operator than their lesser counterparts. These software FAST data management implementations for FLASH deliver the best form of FAST Data Asset with the most shareable margin, some of which is passed on by their MSP customers to their own SaaS subscribers, likely served up from 50%-less-costly colocation facilities these days, making FAST Data Assets implemented in software for FLASH storage the new platinum must-have asset for business.