Latest Comments

gary

In response to: Coraid: Why They Matter

gary [Visitor]
Thanks for the insight. We are a small Start-Up IT company and we are looking at utilizing Coraid here in the near future. This helps put my mind at ease even more. Coraid's vision plays right in with what we like to do, which is to offer something different, or against the grain if you will, to our customers.
Permalink 02/02/14 @ 09:27
Todd A Johnston

In response to: Gartner Storage MQ: An Analysis of the Analysis

Todd A Johnston [Visitor]
Appreciate the matter-of-fact approach and willingness to call out "Gartner" bloat vs. insights. It's lacking in many capacities; however, it's a start.

Perhaps 1st base. Look forward to your continued storage "net net".

-TAJ
Permalink 10/15/13 @ 16:16
CarlinaD

In response to: Gartner Storage MQ: An Analysis of the Analysis

CarlinaD [Visitor]
Thanks for this "analysis." I'm disappointed, though. I was really hoping Gartner's MQ would give me some insight on this.
Permalink 04/24/13 @ 21:44
Luigi

In response to: Gartner Storage MQ: An Analysis of the Analysis

Luigi [Visitor]
I agree with you here. Very nicely put.
Permalink 04/23/13 @ 23:33

In response to: Gartner Storage MQ: An Analysis of the Analysis

Tony Asaro [Member]
Thank you Jim. I rarely am told I am too polite - haha. I am not a Gartner "hater" and feel they do play a role in the industry. They dropped the ball big time on this. I really don't see how anyone can get any real value out of their latest Storage MQ.
Permalink 04/16/13 @ 16:47
JimG

In response to: Gartner Storage MQ: An Analysis of the Analysis

JimG [Visitor]
Nice review Tony, should probably read "An Analysis of Gartner's lack of Analysis". Maybe you were just too polite.
Permalink 04/11/13 @ 04:35

In response to: What's In Store 2011

Tony Asaro [Member]
Sure - but we are in 2011 and I know that is what you meant. But I think I was pretty spot on.
Permalink 09/27/11 @ 16:02
Sean Guerreso

In response to: What's In Store 2011

Sean Guerreso [Visitor]
I am interested to hear how you see the storage industry now in the late part of 2012 vs. your earlier predictions.
Permalink 09/26/11 @ 12:12

In response to: Primary Dedupe: The Next Big Thing in Storage

Tony Asaro [Member]
Trey - agreed that both performance and scalability are barriers. I also agree that Data Domain works well as a backup target and that primary I/O is very different, which is why Data Domain will never find its way into primary. I also agree that data compression is easier to implement with primary than dedupe. It appears I agree with everything you said!

However, I do not think performance and scalability are insurmountable issues. Especially if you talk to the Permabit guys - they say they have an architecture that conquers both so it will be interesting to see one of their OEMs bring their dedupe to market.

Additionally, most data is dormant within a very short window after its creation. And processors and memory keep getting faster and faster.

I also believe data compression is valuable but even more so when you combine it with dedupe.

I am convinced that it is inevitable and that it will become pervasive - it is an issue of time, but I believe we are close.
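For readers new to the topic, the core mechanism being debated - fingerprint each block and store each unique block only once - can be sketched in a few lines. This is a toy illustration of fixed-block dedupe, not Data Domain's or Permabit's actual design:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep one copy per unique block.

    Toy fixed-block deduplication: real primary-storage dedupe must also
    handle random I/O, metadata overhead, and garbage collection.
    """
    store = {}      # fingerprint -> block payload (each unique block stored once)
    recipe = []     # ordered fingerprints needed to reconstruct the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        recipe.append(fp)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original byte stream from the block store and recipe."""
    return b"".join(store[fp] for fp in recipe)
```

With repetitive data, the store holds far fewer bytes than the input while the recipe preserves the full logical size - which is also why random-I/O primary workloads are harder: every read and write goes through the fingerprint index.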

Permalink 07/02/11 @ 10:07
Trey Tharp

In response to: Primary Dedupe: The Next Big Thing in Storage

Trey Tharp [Visitor]
I feel the reason for the lack of mass adoption is the performance impact and scalability. I feel Data Domain is probably one of the fastest dedupe players, and they get data in a perfect way: sequentially, with a static block size. When you look at 80% and higher random workloads of varying block sizes, I see where it could be a challenge.

I'm looking at primary compression being available before dedupe, but that's just my opinion.
Permalink 06/30/11 @ 18:29
Ian Duncan

In response to: The End of NAS

Ian Duncan [Visitor]
Tony,

Identification and the subsequent migration are clearly the major challenges. In the interests of disclosure, I work for a storage company that focuses exclusively on storage for long-term data retention. We regularly see customers who don't have a dedicated data mover (either an archive or ECM application or something home-grown) address project data first (it's easier to identify, it's less likely to be accessed regularly, and in many cases it's not inextricably linked to an application, which causes the stubbing issues you mention). To your point about 'where' you place your long-term data, the issue historically has been that it's a binary decision - either it stays on disk (just because it's easiest and it checks the 'it's there if I need it' box) or it goes to tape (it's cheap), but then they worry that on the off-chance they do want to retrieve the data they'll appear to be sluggish in the eyes of their users. I think the future state is actually a tiered archive (as opposed to an archive tier) - a platform that can balance retrieval, retention and recovery for data - and if the performance is adequate then why not place the data there initially (thus mitigating the original concerns about migration).

Ian
Permalink 03/16/11 @ 15:06

In response to: The End of NAS

Tony Asaro [Member]
Ian - I believe there are two challenges - there are too few solutions that can provide the discovery you need and then move that file data efficiently, out-of-band and heterogeneously. And even if you did - what do you move it to? There are solutions out there but they are still relatively new, or they were just provided by startups, or they just haven't been marketed sufficiently. I think companies are open to a solution but no one has stepped up with the answer yet.
Permalink 03/09/11 @ 11:25
Ian Duncan

In response to: The End of NAS

Ian Duncan [Visitor]
Tony,

Interesting view - I agree wholeheartedly that there is too much cold data on expensive, high-performance systems. It could be argued that Scale-Out NAS is actually going to perpetuate the issue (the easiest thing to do is keep doing what you're doing, and if the Tier 1 systems allow you to get bigger then you could keep doing stupid things for longer). The issue of why people don't put in place a dedicated archive tier goes beyond migration (and/or stubbing) - it needs to be economically very compelling (I'd suggest somewhere in the region of 25% of the cost of doing what you're already doing) and non-disruptive to the work-flow. This raises the point you make about an application still being able to find the data when it's moved, but even where there isn't an application accessing that content (general file shares etc.) you still need to be able to balance retrieval needs (albeit most likely on a very infrequent basis) and retention (whether it's for compliance, governance or just plain 'keeping-it-for-as-long-as-I-think-I-should-have-it-around' reasons). I do believe that the shift in recent years to much more ubiquitous disk-based DR tiers is likely to be the straw that breaks the camel's back for many customers. They may be happy with their primary and DR tiers in terms of performance and scalability, but every new MB of information that is generated now needs to be stored, managed and protected across multiple, expensive, high-performance tiers, and as such they are at least open to the idea that a dedicated archive tier (one that is designed exclusively for retrieval and retention) might make more sense...
Permalink 03/07/11 @ 17:35

In response to: The End of NAS

Tony Asaro [Member]
Greg - thanks for engaging in a discussion on my blog.

The issue of SSDs to me is a question of price/performance. I think the only way to make SSDs universally valuable to customers is leveraging sub-LUN tiering. This is technology that identifies dormant data within a volume and moves it to lower-cost storage - effectively spreading a volume across different cost tiers of storage within an array. Depending on the vendor, the size of the "page" will range - 2k, 42 MB, 256 MB, etc.

Since you are joining Dell you will be happy to know that Compellent is the leader in this technology and does it at the 2k page size (arguably smaller is better in this case). They can also uniquely move active read data to lower tiers - which is valuable and cool.
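The sub-LUN tiering behavior described above can be sketched as a toy model: track access "heat" per page and rebalance periodically. The thresholds and tier counts here are illustrative, not any vendor's actual placement algorithm:

```python
from dataclasses import dataclass

@dataclass
class Page:
    tier: int = 0        # 0 = fastest tier (e.g. SSD); higher = cheaper/slower
    touches: int = 0     # access count in the current measurement window

class SubLunTiering:
    """Toy sub-LUN tiering: a volume is split into pages, and each page is
    placed on a tier independently based on how hot it is."""

    def __init__(self, num_pages: int, tiers: int = 3):
        self.pages = [Page() for _ in range(num_pages)]
        self.tiers = tiers

    def access(self, page_no: int):
        self.pages[page_no].touches += 1

    def rebalance(self, hot_threshold: int = 10):
        # Promote hot pages to the fast tier; demote untouched pages one tier down.
        for p in self.pages:
            if p.touches >= hot_threshold:
                p.tier = 0
            elif p.touches == 0:
                p.tier = min(p.tier + 1, self.tiers - 1)
            p.touches = 0   # start a new measurement window
```

The smaller the page, the more precisely hot and cold data can be separated - which is the argument for small page sizes, at the cost of more metadata to track.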

Permalink 01/17/11 @ 11:10

In response to: Dell and Compellent: The Implications

Tony Asaro [Member]
Hi Greg - thanks for the comment. I didn't say that IBM didn't have storage DNA - I said they had "little" storage DNA within their sales organization. And this is relative to EMC and NetApp. Both EMC and NetApp have thousands of people in their sales organizations that wake up every day thinking about how they can sell more storage.

Also - understand the context that I was saying it in - in spite of the fact that HP and IBM don't have thousands of people dedicated to selling storage, they still do massive amounts of business because of the attach rate of servers. Since Dell is also a server vendor, they can leverage this to sell more Compellent storage. What would happen if Dell actually built its storage sales force to match EMC and NetApp? With Compellent and EqualLogic in their portfolio I think they would be a force to be reckoned with.
Permalink 01/17/11 @ 11:02
Greg Carter

In response to: The End of NAS

Greg Carter [Visitor]
I understand why SSDs aren't a complete panacea when incorporated into block storage systems, being aware of disk enclosure bandwidth limitations and the cost of SSDs.

Understanding where NAS came from and where it plays (sold for Auspex), I wonder which technical issues/limitations have prevented someone like NetApp from incorporating SSDs into arrays managed by the "SAN Mode."

As I said, I know what some of the challenges are, but I don't think SAN Mode has been a game changer, and as innovative as NetApp and others have been, I would think they would have looked at this unless there were some serious obstacles.

Any thoughts on this (and feel free to let me know if this is a stupid question, though I have to admit I will be very hurt and offended :-)) would be appreciated and of interest.
Permalink 01/14/11 @ 20:02
Greg Carter

In response to: Dell and Compellent: The Implications

Greg Carter [Visitor]
Interesting stuff, particularly since I am joining Dell's storage team in the near future, have sold CX3, and was an IBM Storage Specialist.

I don't have any skin in the game - I left IBM in 2002 and am still ticked because I not only hit my numbers, but was given the Vice President's Award 5 months before I got the bad news. Having said that, I am struggling to understand why you don't think IBM has 'storage DNA.' Perhaps not in your case, but I have found that many people have this impression because IBM has leveraged partnerships with LSI (midrange) and NetApp (NAS) over the past 10 years. So I get why people would get the wrong impression.

As far as what an IBM bigot would argue, they contend that IBM 'invented storage' which, while not indisputably true, means that some very smart people see it that way. That indicates a few spindles of DNA, don't you think?

But their DS8000 line has done fairly well and has become rock solid, or so I am told by very senior and very fair-minded people I know who work for IBM and IBM business partners. Perhaps you are correct that their major success is when selling into IBM shops, and that IS a huge difference compared to the server-agnostic approach EMC and HDS have been able to use. So this fuels an impression that IBM thinks of storage as a sidebar, and that impression is certainly not completely incorrect.

From what I have seen myself, and from what my IBM friends tell me, the v7000 has the potential to be a giant. Endowing a midrange system with SVC capabilities will give this system very high end functionality and fault tolerance.

However, fortunately for Dell, EMC and other storage vendors, IBM has proven time and time again to be inept in selling their innovations, even when the technology is as good or better, even when their price/performance is off the charts better than DMX. EMC's 'never goes down' claim is rooted in their strategy of selling an overabundance of mediocre hardware with excellent software, and screwing the hell out of their customers with software and software maintenance costs. If something costs $4M compared to $1.5M and has uptime that is marginally better than an IBM DS8000, which is the better value? The answer, as always, is 'it depends.' I believe that if IBM doesn't price the v7000 out of the market, and if they will 'incent' ALL sales people to sell it, they will make some strides. Note I haven't mentioned XIV and I won't - I hear customers praise the simplicity of the GUI, and that is cool, but until they work out some data protection issues (maybe they have already, I haven't kept track) they will never be considered to be enterprise class. Their claim to instantaneous restoration is a canard - it doesn't rebuild the disks in minutes, it simply restores access to the data very quickly. Perhaps those are equivalent statements but, if so, why do these EMC-bred sales people misrepresent this? THAT, in my opinion, is EMC's true 'DNA.'

Finally, as far as IBM's storage DNA, how about the fact that IBM registers more storage-related patents annually than all other technology companies combined? That alludes to as much or more 'storage DNA' than its competitors. I would respectfully reiterate my guess that these statements are based on IBM's lack of market penetration and presence, which is certainly irrelevant to DNA. Of course, I could be wrong, and often am. I can tell you are a senior expert who I could learn a great deal from, so I hope you won't take this the wrong way.

Now, as far as relying on business partners - that is undeniable in the SMB space, and debatable in the Enterprise space. They do this in the SMB space not because they lack significant storage technical and sales staffing. They do this because this is their storage sales model. They also have storage reps who support the business partners AND sell directly to customers and, additionally, work as a technical overlay to the sales teams (client reps, other server specialists).

The very outstanding senior storage rep they have in Phoenix covers all major accounts (AMEX, Schwab, Honeywell, etc.), and they have another specialist who covers SMB and business partners. They also have an FTSS/SE who is very senior, having joined IBM in 2000 after being manager of the Data Center at Charles Schwab.

I'm not going to spend hours massaging my entry so forgive me if any of this is disjointed or doesn't make sense.

Bottom line, I very much appreciate your article - very interesting and informative. I am thrilled to hear of your perception of Compellent technologies because, though I'd heard good things, I'm not yet very informed as to their products other than that they've had advanced virtualization capabilities for some time.

I'm glad I was referred to your post by a Storage Group in LinkedIn that was discussing Dell's Compellent acquisition. Keep writing - I would love to hear more from you.
Permalink 01/14/11 @ 14:18
John Harris

In response to: EMC FAST - Validating Intelligent Tiered Storage

John Harris [Visitor]
I think data progression based on 'company policy' is good for the storage industry, as it facilitates effective utilization and optimization of storage resources, thereby reducing huge costs for companies. The best part is that it's very much evolutionary. Couldn't agree more with you, Tony - for sure it will have a significant impact on the storage industry. Great post.
Permalink 11/12/10 @ 06:09
Dan

In response to: Gluster - The Red Hat of Storage?

Dan [Visitor]
It seems that if Gluster is going to support NFS and other protocols, they need to implement a virtual IP solution as well. NFS clients access a single management brick (as you can only assign one management console), which means the client is going to a single IP. If that management brick goes down while the client is accessing the storage via NFS rather than the native Gluster client, you are SOL.
It seems like a simple virtual IP addition to the product would be nice.
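The single-IP problem can be illustrated from the client side: without a virtual IP, everything hinges on one address, whereas failover logic (whether in a VIP layer or in the client itself) simply walks a list of candidate servers. A minimal sketch, where the server addresses and the `connect` callable are hypothetical stand-ins, not Gluster APIs:

```python
def connect_with_failover(servers, connect):
    """Try each server address in turn and return the first live connection.

    `servers` is an ordered list of candidate addresses; `connect` is any
    callable that raises ConnectionError when a host is unreachable. This is
    the behavior a floating/virtual IP provides transparently: clients keep
    one address while the address itself moves to a surviving node.
    """
    for host in servers:
        try:
            return connect(host)
        except ConnectionError:
            continue  # this brick is down; try the next one
    raise ConnectionError("no reachable server in %r" % (servers,))
```

With plain NFS against a single brick, the list effectively has one entry, so there is nothing to fail over to - which is exactly the SOL scenario described above.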
Permalink 10/31/10 @ 02:59

In response to: The Future of Storage

Tony Asaro [Member]
Hey Eric - well AoE is interesting. I certainly like the concept. And I think there are some great innovations in storage and there will be more over the next five years. AoE may well be a major innovation that incrementally improves things. However, I am not convinced that it will change the game for storage. Part of my point is that the market itself doesn't lend itself to massive change. Generally, customers of technology are not risk takers and will stick with what they already know. A storage startup with new innovations and technologies will take a 5, 7, 10 year journey to hopefully get to an IPO and/or acquisition. I think Data Domain on the D2D backup side is a game changer but we have no equivalent on the primary storage side. It is a harder problem to solve and is a "red ocean". Data Domain had a blue ocean. That isn't to say that there won't be another EqualLogic or 3PAR - which any startup would be pleased with. But from a market impact point of view - I don't see a game changer on the horizon. I do however believe the game can change in the NAS market but it won't be by building a "better box" or file system - we've seen that "movie" and we know the ending.

Permalink 10/25/10 @ 09:58
Eric Slack

In response to: The Future of Storage

Eric Slack [Visitor]
Tony,
Kudos for creating a discussion about the future of SAN storage. I see your point, that the near term future of the traditional FC SAN will be one of incremental innovation. But what about other technologies that can be used in some of the same applications?
One that I think is really interesting is ATA over Ethernet (AoE). Pretty disruptive stuff.
More info here.
http://www.storage-switzerland.com/Articles/Entries/2010/6/8_Storage_Evolution.html

Thanks,
Eric Slack - Storage-Switzerland
Permalink 10/24/10 @ 02:41

In response to: VMware Makes NFS Mainstream

Tony Asaro [Member]
And VMware has improved performance a ton since then as well. Not only in the overall storage stack but specifically with IP networks. They improved both the iSCSI and NFS stack.

Permalink 07/20/10 @ 00:32
Gregg Dickson

In response to: VMware Makes NFS Mainstream

Gregg Dickson [Visitor]
Fair question. We currently still have Exchange and SQL Server on separate physical servers rather than VMware. And Exchange is clustered using MSCS.

We have been debating migrating the entire enterprise onto VMware but are undecided at this point. We are probably the right size to pull it off with less than 400 employees, moderate email traffic and no heavy OLTP. However, we had performance issues trying to run Citrix on VMware 3.5 supporting ~120 users a few years ago so there is some hesitation within the organization toward moving resource intensive apps/servers back into that environment.

Part of the issue back then (2006) was our lack of understanding of the horizontal scalability paradigm of VMware. We were only running a pair of VM's on each of a pair of ESX hosts. This config resulted in 25-30 concurrent users per VM during peak periods. That wasn't a problem except when a rogue process hung one of the VM's or consumed massive amounts of memory, which happened too frequently on that version of Citrix with the application mix that we were running. Knowing what we know now, we should have been running 4 or even 8 VM's on each server and it probably would have been a fantastic combination.
Permalink 07/19/10 @ 16:50

In response to: VMware Makes NFS Mainstream

Tony Asaro [Member]
My first question is why do you need both? Yes - there are a limited number of vendors that support both. But if you are using VMware why use iSCSI given all of the challenges? I am curious to know - not in a challenging way.



Permalink 07/19/10 @ 15:15
Gregg Dickson

In response to: VMware Makes NFS Mainstream

Gregg Dickson [Visitor]
We are hoping to find a single platform to provide both NFS and iSCSI in the 10TB capacity range. We also don't want to spend a lot of time managing, tweaking and tuning so that narrows the field quite a bit.

The leading candidates at this point are Compellent and NetApp. We are beating the bushes to find other players. We'll take a look at BlueArc and Isilon.

Thanks,
Gregg
Permalink 07/19/10 @ 11:48

In response to: VMware Makes NFS Mainstream

Tony Asaro [Member]
Great point! I can't believe I missed SCSI reservation conflicts - which I heard from a number of IT professionals as an issue. Thanks for the feedback!

It would be interesting to see how you progress on the NFS front. Obviously NetApp provides a solution and I know that BlueArc and Isilon are focusing heavily on VMware as well. Which vendors are you looking at?

Tony

Permalink 07/18/10 @ 20:43
Gregg Dickson

In response to: VMware Makes NFS Mainstream

Gregg Dickson [Visitor]
Great article Tony!
One more benefit of NFS, which we learned the hard way when we lost 50+ production servers in a single instant:

No SCSI reservation conflicts!

Because of a bug in a well known vendor's storage monitoring software, we encountered a SCSI reservation conflict on our FC SAN which caused all of our vSphere Enterprise Plus hosts to lose connectivity to the SAN.

With help from VMware support we were able to recover the primary ~2TB VMFS volume that contained all of the VM images within about 2 hours.

As for the ~4TB of RDM's that contained all of our unstructured file shares, we had to recover from backup, a 48 hour outage in all. Not a pleasant experience!

Needless to say, we are planning to move to NFS. As you mentioned, many of the storage vendors aren't even aware of the issues with VMware and SAN storage architectures. They have that “deer in the headlights” look when we ask for an NFS storage solution for VMware.
Permalink 07/16/10 @ 19:58
Vaughn Stewart

In response to: EMC Anti-Social Media Gang

Vaughn Stewart [Visitor]
It's too bad that the blogosphere has become so polarized. This is a small industry which we all work within, and I have concerns that the current tones may hurt one's ability to attract talent in the future.
Permalink 07/07/10 @ 09:47
John White

In response to: Drobo Elite and Why It Matters

John White [Visitor]
I think that storage people aren't that thrilled about the Drobo because it's a consumer targeted product. In that space, I think the Drobo is a winner.

I don't think your parallel with VMware is a terrific one. That kind of virtualization abstracts an OS instance from the actual hardware it's running on. The Drobo doesn't quite do that. It provides storage redundancy (a good thing), makes the hardware array independent of specific drive sizes (a neat thing), and has software management which allows automatic upgrading of the array (also neat).

Ease of use and continuous upgrades are good design goals, but I think an enterprise storage guy is also worried about tiered performance, single points of failure, multi-chassis expansion, silent data corruption, enterprise apps, multiple host connections, filesystem support, data snapshots, raw drive access, intelligent cache utilization, enterprise support, etc.

Just ... different concerns.
Permalink 03/25/10 @ 12:50
Tristan Rhodes

In response to: Drobo Elite and Why It Matters

Tristan Rhodes [Visitor]
I agree that Drobo has nice features. You can easily grow your storage without worrying about the typical problems and complexities. Just add larger and larger disks, simply by watching the blinky lights!

Have you looked at other vendors that have done this same thing? What are your opinions of them compared to Drobo?

It looks like Netgear's ReadyNas XRAID2 does the same thing:

http://www.readynas.com/?p=656

Also, it looks like QNAP does something similar as well.

http://www.qnap.com/pro_features_RLM.asp

Both of these products are much cheaper than the Drobo.

Thanks for your input!
Permalink 02/12/10 @ 17:33
Edward

In response to: Drobo Elite and Why It Matters

Edward [Visitor]
EvilTed/Colin, I think you're missing Tony's point. Sure you can get better performance on your Qnap 809 Turbo, but the Drobo wins in ease of use.

Let's say you buy a couple of 1.5TB disks now since all you need is some added storage with RAID1. After a few months, you decide you need more capacity and 2TB drives have begun to offer the best bang-for-the-buck. You add a 2TB drive to your Qnap 809 and you'd have to re-build your array in RAID5 and then copy over all your files from your external back-up. With the Drobo, you stick in the drive and you can start using that extra capacity.

I haven't seen any other storage solution that does that, but if you find something, post it here.
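The capacity arithmetic behind that upgrade scenario follows the commonly cited rule of thumb for a mixed-drive pool that must survive one drive failure: usable space is roughly the total capacity minus the largest drive. This is an approximation for illustration, not Drobo's published BeyondRAID algorithm:

```python
def usable_capacity_tb(drives):
    """Rule-of-thumb usable space (in TB) for a mixed-size, single-redundancy
    pool: everything except the largest drive's worth of capacity, since the
    pool must be able to rebuild after losing any one drive (and the largest
    drive is the worst case). Illustrative only - not BeyondRAID's real math.
    """
    if len(drives) < 2:
        return 0.0   # no redundancy possible with fewer than two drives
    return sum(drives) - max(drives)
```

For example: two 1.5TB drives give 1.5TB usable (a mirror); adding a 2TB drive grows that to 3.0TB, with no array rebuild from scratch - which is the ease-of-use win described above.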

I agree that the lack of support for ext4 sucks and it'd be great if they provided OFFICIAL support for Linux.
Permalink 02/12/10 @ 06:22
Manu Gupta

In response to: Gluster - The Red Hat of Storage?

Manu Gupta [Visitor]
I understand that you have implemented a High Availability solution using synchronous replication, but it's not "automatic failover" where you perform an automatic data consistency check as part of the recovery process. Right?
Permalink 01/04/10 @ 17:09
Anand Babu Periasamy

In response to: Gluster - The Red Hat of Storage?

Anand Babu Periasamy [Visitor]
Eli,
HDFS is a distributed object storage system with a centralized metadata server. It is specifically designed for the map-reduce framework and can only store large objects (64MB and above). For general-purpose storage, users are not willing to make changes to their applications to use HDFS APIs.

HDFS objects are stored as structured files on top of regular disk filesystems. You still need the metadata to restore its objects. Data is stored in a format proprietary to HDFS.

As your storage volumes grow from 10s of TBs to 100s of TBs, it becomes painful to recover from a crash. Filesystem check downtime can take from days to weeks. That is why keeping the files and folders as-is (similar to NFS) is very crucial to scalability.
Permalink 12/31/09 @ 16:30
Max Cohen

In response to: Gluster - The Red Hat of Storage?

Max Cohen [Visitor]
Some stories about HDFS from wikipedia

------------
A filesystem requires one unique server, the name node. This is a single point of failure for an HDFS installation. If the name node goes down, the filesystem is offline. When it comes back up, the name node must replay all outstanding operations. This replay process can take over half an hour for a big cluster.[10] The filesystem includes what is called a Secondary Namenode, which misleads some people into thinking that when the primary Namenode goes offline, the Secondary Namenode takes over. In fact, the Secondary Namenode regularly connects with the namenode and downloads a snapshot of the primary Namenode's directory information, which is then saved to a directory. This Secondary Namenode is used together with the edit log of the Primary Namenode to create an up-to-date directory structure.

Another limitation of HDFS is that it cannot be directly mounted by an existing operating system. Getting data into and out of the HDFS file system, an action that often needs to be performed before and after executing a job, can be inconvenient. A Filesystem in Userspace has been developed to address this problem, at least for Linux and some other Unix systems.
---------------

As you read this, it is outrageous to have a distributed filesystem with a single point of failure, and even more ridiculous to replay all outstanding calls on recovery, which is claimed to take the better part of an hour. Here is a funny question: has HDFS ever been installed with 1000 clients? Did they try replaying calls at that scale? I wouldn't be surprised if it failed at the first attempt. Also, the fact that you can't mount HDFS as a normal filesystem is even stranger - I think this is what the Gluster folks were trying to point out: it isn't even POSIX compliant. Yahoo! uses it only because they didn't have another solution, so they built their applications around it with HTTP GET/PUT requests - and stranger still, you would need a userspace filesystem to access files from HDFS.

All in all, Hadoop is a far cry from even calling itself a filesystem. Lustre is far better than Hadoop in many cases, as it feels like a filesystem per se. But again, Lustre has the same problem of a single metadata server. I am not sure why people cannot see that writing code to handle metadata centrally is misguided, when backend filesystems have done this job amazingly well over the years.

MogileFS showed some promise, but its performance is poor and it has several design limitations.



Permalink 12/31/09 @ 15:11
Eli Collins

In response to: Gluster - The Red Hat of Storage?

Eli Collins [Visitor]
Correction: Hadoop's distributed file system (HDFS) does not store data in a proprietary format. Files are stored in blocks as regular files and directories.

Permalink 12/30/09 @ 14:37

In response to: Gluster - The Red Hat of Storage?

Tony Asaro [Member]
Steve - I think that we actually are saying the same thing - it is an issue of how long the status quo remains dominant. Remember that people have been predicting the demise of Unix for 10 years now. But actually IBM sold over $6 billion worth of Unix servers, Sun sold over $4 billion and HP sold over $4 billion in 2009 - so I think it is far from dead. Unix will be around for a very long time.

The same will be true for traditional storage - it will take years for people to completely make the shift. In that time - could a true open source storage system have a major impact on the market? I believe the answer is yes.

Regardless, I do think that GlusterFS may be the start of something that is exciting. But it is a long road with lots of cool milestones and challenges on the way. To take a page from your recent blog on why startups fail - http://tinyurl.com/ybnv6u4 - in addition to needing a solid product they need great marketing.



Permalink 12/29/09 @ 18:35
Steve Duplessie

In response to: Gluster - The Red Hat of Storage?

Steve Duplessie [Visitor]
Interesting. I don't know much about them, but I like the model comparison. Red Hat makes PILES of dough by supporting their open source software - and gets the leverage of a free global development effort. Wanna know who else makes piles of dough off of that same Red Hat code? Oracle. Same reason - people trust Oracle to support the stack.

I would argue that the storage world will stay vibrant - as the Unix world has. The Unix world is dying every day for mainstream applications - really only Solaris remains, and for how long? As people flock from Unix they go either to Microsoft (gasp) or to Red Hat (and to a lesser degree, Novell), but either way, they are leaving.

The same will eventually be true in storage. Heavy weight OS type functions embedded in a storage controller are the same thing as MPE in an HP PA-RISC system 20 years ago - bloated, hard to support, and have diminishing value to customers. Removing the voodoo and opening up these functions has a history of working, so I figure it's just a matter of time.

Only question in my mind is how long will it take?

Cheers
Permalink 12/29/09 @ 16:52
Liem Nguyen

In response to: EMC FAST - Validating Intelligent Tiered Storage

Liem Nguyen [Visitor]
Right, we're not changing our DNA. We just want to make sure we have the products, services and team in place that can support customers of all sizes. That's how we'll grow. As you pointed out, there are 100s, if not 1000s, of midsized enterprises that have good choices to make for storage. Of course, we'd love it if they'd choose us. :)
PermalinkPermalink 12/11/09 @ 11:40

In response to: EMC FAST - Validating Intelligent Tiered Storage

Tony Asaro [Member]
Liem - Let me make a correction to my position - while I agree that you should go after large Enterprises - you should compete for the midrange applications and not go after the high-end stuff. In other words, compete for the 100s of CLARiiONs that these customers have versus the handful of DMX systems.

Tony
PermalinkPermalink 12/11/09 @ 11:20
Liem Nguyen

In response to: EMC FAST - Validating Intelligent Tiered Storage

Liem Nguyen [Visitor]
Agree, great discussion, Tony, thanks for starting it! Compellent naturally has ambitions to grow our company, and the storage market is big enough for Compellent to share with EMC and 3Par. :-)

Enrico and Tony are right - federation of arrays is something we've discussed; we realize a super-SAN structure like that offers a lot of advantages for end users. Though Compellent continues to focus on the midsized enterprise, we also recognize large enterprises can benefit from our features. We'll definitely continue to expand usage models and functionality for automated tiered storage. A persistent, modular architecture - which is what we base Storage Center on - provides a lot of flexibility in the way it supports new software and hardware technologies as they emerge (InfiniBand, FCoE, 10GbE, etc.). That's why we've been able to add support for SSDs with automated tiered storage without having to introduce a new model. We'll have more info to share in 2010.
PermalinkPermalink 12/11/09 @ 10:59

In response to: EMC FAST - Validating Intelligent Tiered Storage

Tony Asaro [Member]
Barry - good discussion.

Well, first, I said that Compellent "could" reach a billion - not that they necessarily will. In other words, the midrange market is such that there is certainly enough market opportunity for them to achieve this with the capabilities they have today.

There is no silver bullet, new feature or capability that will change the landscape for them - and I am saying they don't need it. Certainly the things that Enrico suggested would not add that much to their top line. Rather, their continued success requires business execution year after year after year.

Additionally, going after higher end environments would most likely detract from their current go-to-market focus and hurt them.

Remember that features may appear similar but often what is comparable on paper is not so in the real world. Additionally, it is the overall experience of working with the product and the vendor that also matters. Compellent has an excellent storage system - it is feature-rich and it is very easy to use. And that ease of use extends over its life cycle - which is very important.

Compare LeftHand and EqualLogic. Arguably, LeftHand had some advanced technologies that EqualLogic did not - such as a true clustered n-way architecture - one of the very things that Enrico is talking about. And yet, EqualLogic executed on their business much more successfully than LeftHand.

At the stage that Compellent is at - the MOST important thing for them is executing on sales with laser-like focus. They need to become a machine. And perhaps they have achieved this already - I haven't spoken to them in some time. But that is more important than feature-creep.

In any case, my blog wasn't about Compellent. But it is a good discussion :)
PermalinkPermalink 12/10/09 @ 18:59
the storage anarchist

In response to: EMC FAST - Validating Intelligent Tiered Storage

Really? A $BILLION?

Even if (when) the established vendors implement similar features? I mean, FAST (et al.) is just another storage feature that everyone will (eventually) support, much like thin provisioning or VMware integration - right?

Or are you saying that it's break-away differentiation and Compellent enjoys a defensible market segment?

Seriously - I'm curious.
PermalinkPermalink 12/10/09 @ 16:31

In response to: EMC FAST - Validating Intelligent Tiered Storage

Tony Asaro [Member]
Enrico,

It is interesting - I had a conversation with them about what you propose maybe four years ago. And their answer was: "why?" Yes, all of the things you mentioned can add value, but they don't impact the lion's share of the market - especially not the market that Compellent is going after.

One could argue that things have changed over the last four years and there is more of a need for a scale out or federated architecture. But I don't believe that is true for them. I believe that Compellent can reach a billion dollars in revenue with the solution they have.

IT vendors have to be careful not to chase every shiny object that presents itself. Instead, they need to understand the requirements of the majority of the market.

Additionally, they need to understand what battles to fight - going after the large Enterprise customers is not what I would recommend.
PermalinkPermalink 12/10/09 @ 13:29
Enrico Signoretti

In response to: EMC FAST - Validating Intelligent Tiered Storage

Enrico Signoretti [Visitor]
Tony,
I think Compellent needs to go further and develop a federated infrastructure - with the ability to move LUNs between arrays, load balancing, and horizontal scaling - to get a big enterprise system built from standard building blocks and managed as a single one!

Enrico
PermalinkPermalink 12/10/09 @ 06:24

In response to: EMC FAST - Validating Intelligent Tiered Storage

Tony Asaro [Member]
Thanks Marc. I am not sure that Compellent needs to do much more innovation around Data Progression unless they are going to go after the bigger customers. I think for their sweet spot they are in good shape. Thanks for the feedback, brother!

Tony
PermalinkPermalink 12/10/09 @ 00:04
marc farley

In response to: EMC FAST - Validating Intelligent Tiered Storage

marc farley [Visitor]
I agree - well said. It's no slight to Compellent to say that FAST from EMC puts online data movement in the mainstream. It's also clear that Compellent will need to continue to upgrade Data Progression if they want to keep claiming leadership in this category.
PermalinkPermalink 12/09/09 @ 11:57
the storage anarchist

In response to: EMC FAST - Validating Intelligent Tiered Storage

Well said.
PermalinkPermalink 12/09/09 @ 10:42

In response to: Drobo Elite and Why It Matters

Tony Asaro [Member]
Tom - I don't know Colin's specific situation or the users he is referring to but I like your analogy.

Thanks,

Tony
PermalinkPermalink 12/02/09 @ 15:55
Tom

In response to: Drobo Elite and Why It Matters

Tom [Visitor]
Jeez Ted!

So someone had an accident because he walked across the street while the lights were red, and now they shout at the lights?

It's the same with Drobo: if you ignore Drobo (and any other thin-provisioned storage, for that matter) telling you to add capacity, you shouldn't cry if you lose data.

I've seen stuff like this at bigger companies with SANs in the seven-figure price range.
PermalinkPermalink 12/02/09 @ 09:40
Vikki

In response to: Blah Holidays

Vikki [Visitor]
With the current state of the economy, this cartoon rings more true than ever.
PermalinkPermalink 12/01/09 @ 23:37