Comment from: Trey Tharp [Visitor]
I feel the reason for the lack of mass adoption is performance impact and scalability. I feel Data Domain is probably one of the fastest dedupe players, and they get data in an ideal way: sequentially, with a static block size. When you look at 80%-and-higher random workloads with varying block sizes, I can see where it would be a challenge.
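
To illustrate the fixed-block sensitivity in a rough way - a minimal Python sketch, where the 4 KB block size and SHA-256 fingerprints are my own illustrative assumptions, not Data Domain specifics - a single inserted byte shifts every fixed block boundary after it, which is part of why random, variable-sized writes are harder to dedupe:

```python
import hashlib
import os

BLOCK_SIZE = 4096  # illustrative fixed block size, not a Data Domain specific

def fixed_block_fingerprints(data: bytes) -> set:
    """Split a stream into fixed-size blocks and fingerprint each block."""
    return {
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    }

original = os.urandom(256 * 1024)   # 256 KB of arbitrary data
shifted = b"X" + original           # the same data with one byte inserted up front

a = fixed_block_fingerprints(original)
b = fixed_block_fingerprints(shifted)

# With fixed boundaries, the one-byte shift moves every block boundary,
# so almost no fingerprints match even though the data is nearly identical.
print(f"shared blocks: {len(a & b)} of {len(a)}")
```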

I expect primary compression to be available before dedupe, but that's just my opinion.
06/30/11 @ 18:29
Comment from: Tony Asaro [Member]
Trey - agreed that both performance and scalability are barriers. I also agree that Data Domain works well as a backup target, that primary I/O is very different, and that this is why Data Domain will never find its way into primary storage. I also agree that data compression is easier to implement for primary storage than dedupe is. It appears I agree with everything you said!

However, I do not think performance and scalability are insurmountable issues. Especially if you talk to the Permabit guys - they say they have an architecture that conquers both, so it will be interesting to see one of their OEMs bring their dedupe to market.

Additionally, most data goes dormant within a very short window after its creation. And processors and memory keep getting faster and faster.

I also believe data compression is valuable, but even more so when you combine it with dedupe.
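
As a rough illustration of why the two combine well - a toy Python sketch where the 4 KB fixed chunking, SHA-256 fingerprints, and zlib are all my own illustrative assumptions - dedupe removes whole duplicate blocks that compression's small window cannot see, and compression then squeezes the unique blocks that remain:

```python
import hashlib
import os
import zlib

CHUNK = 4096  # illustrative fixed chunk size

def chunks(data: bytes):
    for i in range(0, len(data), CHUNK):
        yield data[i:i + CHUNK]

# Toy data set: each 4 KB chunk is half random, half zeros (so it compresses),
# and the whole 64 KB block is stored eight times (so it dedupes).
base = b"".join(os.urandom(2048) + b"\x00" * 2048 for _ in range(16))
data = base * 8  # 512 KB total

# Compression alone: zlib's 32 KB window cannot reach duplicates 64 KB apart.
compressed_only = len(zlib.compress(data))

# Dedupe alone: keep one copy of each unique chunk.
unique = {}
for c in chunks(data):
    unique.setdefault(hashlib.sha256(c).digest(), c)
dedupe_only = sum(len(c) for c in unique.values())

# Dedupe first, then compress the surviving unique chunks.
both = sum(len(zlib.compress(c)) for c in unique.values())

print(f"original:             {len(data):>7} bytes")
print(f"compression only:     {compressed_only:>7} bytes")
print(f"dedupe only:          {dedupe_only:>7} bytes")
print(f"dedupe + compression: {both:>7} bytes")
```

On this toy data the combination beats either technique alone, because each one attacks redundancy the other misses.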

I am convinced that it is inevitable and that it will become pervasive. It is a matter of time, but I believe we are close.

07/02/11 @ 10:07
