The Adobe® Digital Enterprise Platform (ADEP) Experience Server supports WAN clustering (important for high-latency links and distributed infrastructure), hot cluster join (allowing you to expand infrastructure on the fly), and runs with a small memory and CPU footprint. This makes the Experience Server well suited to cloud deployment, whether deployments ultimately run in the cloud or on premises.
In pursuing interaction patterns, ADEP takes a mobile-first approach (particularly tablets) and then expands to other environments. ADEP can detect over 17,000 devices, and its device emulation support lets content contributors see exactly what experience will be delivered to each segment of content consumers. ADEP introduces the concept of device groups to reduce the complexity of managing an ever-growing range of devices and device types.
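To make the device-group idea concrete, here is a minimal sketch in Python. The group names, capability fields, and lookup table are all illustrative assumptions, not ADEP's actual API; the point is simply that thousands of individual devices collapse into a handful of groups that content authors target.

```python
# Hypothetical sketch: collapsing a large device description repository
# into a few content-facing device groups.

# Stand-in for a repository with 17,000+ entries (WURFL-style).
KNOWN_DEVICES = {
    "Apple iPad": {"width": 768, "touch": True},
    "Nokia 6300": {"width": 240, "touch": False},
}

def device_group(device_name: str) -> str:
    """Map a detected device to the group its experience is authored for."""
    caps = KNOWN_DEVICES.get(device_name)
    if caps is None:
        return "desktop"  # unknown devices fall back to the default experience
    if caps["touch"] and caps["width"] >= 768:
        return "tablet"
    if caps["touch"]:
        return "smartphone"
    return "feature-phone"
```

Content is then authored once per group rather than once per device, which is what keeps the never-ending stream of new devices manageable.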
Today’s customers increasingly lean on social activity to validate their decisions and to share them with others. ADEP supports a range of social capabilities, including support for local communities and the ability to glean information from public communities (Facebook, Twitter, etc.) and use that information to tailor the customer experience. Social capabilities in the platform are much like the public social environment: they surround everything we do and are available for use at any time, for any purpose.
You build applications for the cloud with on premise in mind,
You build applications for mobile with desktop in mind, and
You understand that every user is a contributor and has a social graph.
 i.e. ADEP Experience Server is “cloud ready”
 Adobe’s Customer Experience Solution for Web Experience Management (previously known as CQ5), leverages the WURFL device description repository.
On Monday EMC announced Atmos. While I was in flight, returning from a short vacation, a number of my EMC colleagues blogged about this new offering. Their posts are worth your attention:
Steve Todd first offers, among other things, a concise definition of Cloud Optimized Storage (COS): “global storage with a policy focus.” He continues in a follow-up post by delving into more details concerning the “special sauce” of Atmos (i.e. its use of global policies). I have the strong sense that more posts will come from Steve regarding Atmos in the not too distant future, too.
Barry Burke: “Atmos seeks to blaze a new approach to “cloud” storage (oh how I hate that term), to create a global storage platform that is not only cost-effective to install and grow, but extremely efficient to operate as well…set-it-and-forget-it cloud storage. And trust me, if your business thinks in petabytes or even exabytes of unstructured data, you’re already looking for a totally new storage paradigm, because nothing – and I mean NOTHING – built on current commercial file systems or databases will handle that kind of storage.”
After some humorous precursor posts, Mark Twomey dives into Atmos by relaying his conversation with one of the Atmos architects, Dr. Patrick Eaton, who was also involved in OceanStore.
Dave Graham talks about what Atmos is and is not and then covers the underlying architecture of Atmos.
Last but not least, Chuck Hollis provides his perspective on Atmos, drawing in other commentary from the web in the process.
During the first keynote of PDC last week, Bob Muglia associated this year’s PDC with the 1992 PDC, which featured the coming out of Windows NT. (I still think of “WNT” as “V++ M++ S++”, given David Cutler‘s leadership on both operating systems.)
I think there is much to draw from this comparison where the future for Windows Azure is concerned.
In 1992, PDC’s coverage of NT followed after at least a two-year effort to develop the new operating system (e.g. the OS first ran–minimally and non-commercially at Microsoft–in 1990). Commercial availability followed PDC a year later (i.e. Windows NT 3.1). However, adoption of NT didn’t take off until 1996, with the release of Windows NT 4.0 (and the availability of hardware and applications necessary to accomplish day-to-day work).
I’m not saying that Azure’s “take-off” won’t come until 2012 (i.e. “Red Dog”‘s 2006 commencement plus six years). Yet Microsoft’s own comparison of Azure to NT is helpful both in combating the near-term tendency to hype and in understanding the long-term potential of the cloud as Microsoft envisions it.
Certainly the vision of Windows Azure (aka “Red Dog”) and the Azure Services Platform is substantial. However, for Microsoft, its partners, and its customers to realize that vision, the platform must deliver business value.
Internal or external, cloud computing has to address a set of real business problems in order to become a relevant part of one’s development arsenal. Some business models are more closely aligned with the cloud than others. New business models will emerge.
I guess that the technology industry is tired of TLAs like MSP and ASP. In fact, it seems like FLAs like SaaS and PaaS are passé, too. Only five characters will do, and analogy has replaced acronym: cloud.
During the keynotes this morning, Ray Ozzie suggested that cloud (or utility) computing is materially different from the past innovations upon which it rests, because it is focused on the externalization of IT and the critical requirement to scale out.
According to Gartner, there are five trends driving companies like Microsoft and Google in their march toward cloud computing as follows:
Software as a service
Web 2.0 products, such as collaborative technologies, social networking and wikis
Consumerization of technology
Global class, a new way to deliver computing services
So I’m looking for content and discussion concerning cloud computing that addresses the following questions:
What are the API differences between this OS (Windows Azure) and a traditional Windows OS (e.g. Windows Server 2008)? What features/functions are unique to Azure (and why)?
What about composition in the cloud?
What about cross-app-in-the-cloud functionality (e.g. events and other synergies)?
What are the significant ISV/partner opportunities (e.g. platform level, application level and integrated solutions level) created by the “Azure ecosystem”?
What new issues arise in the cloud? Regulatory compliance cannot be compromised. Commingling of live and backed-up data can pose concerns. “Premise matters” (eventually), so virtualization, geography, data sovereignty, and the like pose additional concerns. How does Azure address such concerns?
In a few minutes, I’ll be taking an initial “lap around” Azure, which should be interesting. Stay tuned…