Cloud markets and networking layers

Let’s look at the software-defined networking (SDN) space. It’s one of the buzzwords you’re hearing more about in the market, along with network virtualization, network function virtualization (NFV), and debates like overlay versus OpenFlow.

[Diagram: CohesiveFT SDN layer cake]
You can think about the SDN market and its features and functions as existing above and below this line of access and control.

Or you can say the boundary is defined by service provider infrastructure: you, the customer, and your applications run on the infrastructure the service provider offers.
[Diagram: CohesiveFT SDN market layers]

So when you look at the service provider layer, the big noise is driven by the energy around the emerging OpenFlow standards and the Nicira acquisition by VMware.

That has focused the market, right now and for the immediate future, not on customers’ ability to control and deliver applications but on helping service providers deliver a better cloud. Service Provider SDN is about “help me deliver a better cloud; help me run my data centers as a global service provision business.” Everything in that infrastructure is good for customers, but not directly: for most companies, the benefits of Service Provider SDN are only indirect.

There’s an incredible amount of innovation going on below the line, and service providers are growing with better service and federation. This growth is part of why cloud infrastructure keeps expanding and is assuming such a dominant place in the landscape.

When we go above the line into the world of you, the customer, this is where your applications reside. You want to deliver your applications to the cloud, and you want to be in control of those applications. So you tend to have solutions like ours, VNS3.
VNS3 is an instance-based solution. You run VNS3 instances as part of your overall cloud topology. You can run them as virtual routers, switches, firewalls or protocol re-distributors. They become part of your application topology, much as the networking devices in your data center serve the network at the individual row or rack level.
So you’re moving a network to the cloud with your application, not necessarily the network as a whole.
One thing to note on the SDN topic: we’ve been doing this since October 2007. We have extraordinary depth of customer experience in networking and the cloud. We’ve driven about 60 million device hours with some of the use cases you’re about to see, and about 90% of our customers are running production loads in the cloud, where their ability to scale their business depends on our ability to deliver for them.

What retailer BCBGMAXAZRIA learned about cloud security, SIEM

The following is an excerpt of the recently released case study on how a major retailer, BCBG, migrated to a cloud security platform and discovered how SIEM and Log Management capabilities enhanced its enterprise security. For the entire case study, you may download a PDF version here (direct; no forms to fill out!).

There was a time when the only security issue retailers needed to worry about was theft: put a guard in the store and a couple of video cameras, and prevent as much loss as possible. Those days are long gone.

The overall security of a retail organization has grown increasingly complex. The smash and grab has been supplanted by the hack and breach. A retailer’s IT environment is at as much risk as the product on the retail shelf. Every year hundreds of retailers fall victim to electronic intrusion. Ask Raley’s, Zaxby’s, Mapco, Michaels and dozens of other retailers how their POS and other exposed systems were not only breached, but cost them untold millions of dollars in stolen customer credit cards, abused sensitive data, and reparations and fines.

But this is not a lesson on the failings of retailers. That would be unfair. The issue is multifaceted, especially when trying to mitigate risk. For any company, including retailers, risk must be balanced against budget, available resources, recognized vulnerabilities and the need to maintain compliance. For each company the answer is unique, but there are certain realities any retailer should consider to better protect customer assets…and to do so without compounding costs, burdening infrastructure resources or taxing manpower.

Here’s how one retailer overcame similar challenges by incorporating CloudAccess solutions.

Like most retailers of its size, BCBG (BCBGMAXAZRIA) was challenged with a variety of security issues: a great many devices across the country creating vast amounts of log data that overwhelmed existing IT resources. On top of that, there were questions about the level of POS exposure across a substantially distributed network (which necessitated the use of Internet resources), about regulating and monitoring online account access, and about internal threats.

According to CIO Nader Karimi, the team understood that firewalls, malware detection and weekly system-log reviews were generally enough to satisfy compliance (PCI DSS) audits, but because of unseen vulnerability gaps BCBG was still at risk of external breach and internal intrusion. They simply did not have enough visibility into their prolific network assets, and not enough eyes to catch issues in a sea of data quickly enough to solve them. This was not due to a lack of talent or effort; like most companies, they were simply stretched too thin.

BCBG recognized they needed to address the issue:

“With limited data center space and budget, we didn’t want to deal with all the hardware, redundancy and backup. We also had limited manpower here internally, so we couldn’t keep up with all the security trends and fine-tune the rules on a daily basis,” said Karl Ma, BCBG’s Senior Manager of Global Information Security. Listen to Karl Ma’s entire video review here.

CloudAccess’ cloud-based security and security-as-a-service offering solved several of their problems by:

1.    Instituting real-time monitoring 24/7/365
2.    Incorporating data from all POS and register devices
3.    Creating alert processes via trans-enterprise correlation
4.    Reducing costs while expanding capability
5.    Streamlining compliance reporting
6.    Transitioning to a proactive defense without adding resources

Risk is rising…especially for retailers. Every survey, report and anecdote suggests security issues are becoming a more significant and louder talking point. Everything from the introduction of new technologies, the morphing of infrastructures beyond network perimeters, to threats of breach, shadow IT and other internal vulnerabilities indicates that managing a firewall and filtering email is not enough. Security must be woven into the fabric of the modern retail enterprise. As retail moves to the cloud, so must retail security. You can add every security solution to the internal network, but that will not stop threats that target e-commerce and other public-facing resources. It isn’t a local issue anymore, so security can’t be just local either.

And as BCBG discovered, through the cloud, compliance is easier, enhanced security best-practices are more than affordable (low TCO, high ROI), and security-as-a-service has proven to be considerably less intrusive on resources. According to Karimi, “BCBG is now more confident in protecting the personal and sensitive data entrusted to us by employees, partners, vendors, suppliers, and especially customers.”

Why you need standardization of SLAs in a multi-cloud environment

Many technical experts believe that cloud computing is capable of reshaping the entire ICT industry in a revolutionary manner. With the introduction and development of cloud computing, two business entities have emerged: cloud service providers and cloud consumers.

Although consumers of cloud services do not have much control over the underlying computing resources, it is essential that they obtain the necessary guarantees about service delivery standards.

These guarantees are usually provided through an SLA (Service Level Agreement) negotiated between the service provider and the cloud service consumer. One of the major requirements of a well-defined SLA is standardization.

It must have an appropriate level of granularity, balancing the trade-off between complexity and expressiveness, so that most consumer expectations are covered. An SLA must also be relatively straightforward to verify, weigh and evaluate.

You cannot rush the process of developing an SLA between the cloud service provider and the cloud consumer. Many aspects need careful consideration. One such aspect is the standardization of SLAs in a multi-cloud environment, and it deserves the utmost attention.

While an SLA can be considered a contract between a cloud service provider and a cloud customer, which may be an enterprise, a government agency or a business, the lengthening value chain for related services has made SLAs significant for the many multifaceted and intricate relationships that partners share.

Service provider roles may also overlap. The same provider may deliver cloud-based services to vendors, businesses, government agencies, large enterprises, SaaS end users, IaaS infrastructure specialists and PaaS developers.

Likewise, the same vendor may provide services to network providers, cloud service providers, web service providers, businesses, enterprises and government agencies. Since innumerable relationships are possible, each with its own terms and conditions to keep running smoothly, standardization of the SLA is an attractive solution.

Numerous businesses and individuals are bound by SLAs based on the services they provide, and the relationships they enter into take different forms: many-to-many, one-to-many and many-to-one.

For example, in pre-disaster planning, a requesting service provider must determine whether reserved resources would be available through a substitute provider. This is where many-to-many and one-to-many relationships come into play: to arrange reserved resources for emergencies or catastrophic situations, a requesting provider can enter into relationships with one or more substitute service providers.

A substitute service provider is itself a cloud service provider that agrees to provide data transfer services to a requesting service provider so the latter can address load balancing or outage issues on its end.

Hence, to compete successfully, companies must manage their quality of service proactively. Since the provisioning of such services depends entirely on multiple partners, SLA management becomes a critical success factor. Standardizing SLA terminology across the partners is equally imperative: without it, the definition used by one partner (say, a requesting service provider) may differ from the definition used by another partner, vendor or substitute provider, and partners may also define SLA parameters in dissimilar ways.

When these differences are not addressed during negotiation, they can undermine the SLA process; questions such as what the penalty is and when it should be imposed may then arise.
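To make the idea concrete, here is a minimal, hypothetical Python sketch of what a standardized SLA term might look like when every partner shares the same metric names, measurement windows and penalty rules. The metric names and penalty scheme are illustrative assumptions, not a published standard.

```python
from dataclasses import dataclass
from enum import Enum


class Metric(Enum):
    """Shared vocabulary: every partner uses the same metric names."""
    AVAILABILITY = "availability_percent"
    RESPONSE_TIME = "response_time_ms"
    RECOVERY_TIME = "recovery_time_minutes"


@dataclass
class SLATerm:
    """One standardized SLA clause, defined identically for all partners."""
    metric: Metric
    target: float                  # e.g. 99.9 (% uptime) or 200 (ms)
    measurement_window_days: int   # how long the metric is averaged over
    penalty_percent_of_fee: float  # service credit if the target is missed

    def penalty_due(self, measured: float) -> float:
        """Return the penalty (as % of fee) owed for a measured value."""
        missed = (measured < self.target if self.metric is Metric.AVAILABILITY
                  else measured > self.target)
        return self.penalty_percent_of_fee if missed else 0.0


# The requesting provider and the substitute provider reference the same term,
# so "availability" and its penalty mean the same thing to both parties.
uptime = SLATerm(Metric.AVAILABILITY, target=99.9,
                 measurement_window_days=30, penalty_percent_of_fee=10.0)
print(uptime.penalty_due(measured=99.5))  # -> 10.0
```

Because both parties share the same definition of the metric, the window and the penalty, a dispute reduces to comparing a measured number against an agreed target rather than arguing over what the term means.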

Therefore, standardization of SLAs is the basic and fundamental requirement for running these relationships smoothly. It provides significant input to strategic plans, helps optimize end-to-end business processes, enhances customer support and produces gains in cost efficiency.

How to architect scalable applications in the Cloud

The Internet brought many new concepts that people now use for comfort, enjoyment and just about everything else. Today it is easy to reach family and friends far away with the latest social networking applications.

But that is only the user side. For an application developer there are a whole lot of challenges, such as managing fluctuating Internet traffic to an application or website as it reaches new scalability limits. Traffic can jump from thousands of users to millions, so developers and architects have to scale their applications by taking advantage of cloud resources.

The cloud provides quick resource allocation and de-allocation for unpredictable demand, which makes it a natural fit for scalable applications. Beyond that, every phase of an application’s life can be accommodated by cloud resources and infrastructure.

A scalable architecture lets us test an application under real-world conditions and scale it to match demand. Unpredictable traffic can stress a system in every way; a scalable application adapts to these rapidly changing conditions and preserves the trustworthiness and availability of the service.

The following reference model shows how you can architect scalable applications in the cloud:

Scalable Architecture Reference

It is similar to the classic three-tier web architecture, with a caching tier between the application servers and the database and a few other changes that make it capable of handling a scalable application. It starts with DNS pointing to load balancers, which in turn feed an array of application servers; those servers connect to the cache and the database.

Load Balancing Tier

This is the first tier in the reference model. It needs two load balancers to provide redundancy in case one fails. The two should be placed in different zones, with separate network and power connectivity, which increases the application’s availability and reliability. In the application’s early life two load balancers are enough; as users increase, more can be added. Another option is ELB (Elastic Load Balancing), Amazon’s managed load balancer, which scales automatically to handle increased load.
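As a rough illustration of what the balancers in this tier do, here is a minimal Python sketch of round-robin distribution with a health check. The host names are hypothetical, the health check is a placeholder, and a managed service such as ELB handles all of this for you.

```python
import itertools

# Hypothetical backend pool: two application servers in different zones.
SERVERS = ["app-a.zone-1.example.internal", "app-b.zone-2.example.internal"]
_rotation = itertools.cycle(SERVERS)


def is_healthy(server: str) -> bool:
    """Placeholder health check; a real load balancer would probe HTTP or TCP."""
    return True


def next_server() -> str:
    """Round-robin over the pool, skipping servers that fail the health check."""
    for _ in range(len(SERVERS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy application servers available")


print(next_server())  # e.g. app-a.zone-1.example.internal
```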

 

Application Tier

A scalable server array makes up this second tier of the reference architecture. Initially the tier is configured with two servers (in different availability zones) and an automatic scaling alert mechanism driven by instance-specific metrics. System load, free memory and CPU idle time are the most common metrics used for auto scaling.

When a metric crosses its threshold, an alert is raised and auto scaling kicks in; whether the array scales up or down depends on which threshold was crossed. To cut costs in the early stage of the application’s life cycle, the front-end load balancers may be combined with the application servers to save infrastructure expenditure, and the two can be separated later.

The best approach is to scale up early and scale down conservatively: when an upward trend in users is detected, initiate additional instances before they are actually required.
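A minimal sketch of that scaling decision is shown below. The thresholds and step sizes are made-up values for illustration; in practice the alerting and scaling are handled by the cloud provider’s monitoring and auto-scaling services, tuned per application.

```python
# Illustrative thresholds; real deployments tune these per application.
SCALE_UP_CPU = 70.0    # % average CPU that triggers adding servers
SCALE_DOWN_CPU = 25.0  # % average CPU that allows removing a server
MIN_SERVERS = 2        # keep two servers in different zones at all times
MAX_SERVERS = 20


def desired_server_count(current: int, avg_cpu: float) -> int:
    """Scale up eagerly, scale down conservatively (one server at a time)."""
    if avg_cpu > SCALE_UP_CPU:
        return min(current + 2, MAX_SERVERS)   # add capacity before it is needed
    if avg_cpu < SCALE_DOWN_CPU:
        return max(current - 1, MIN_SERVERS)   # shrink slowly to avoid flapping
    return current


print(desired_server_count(current=2, avg_cpu=85.0))  # -> 4
print(desired_server_count(current=4, avg_cpu=15.0))  # -> 3
```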

 

Caching Tier

Its aim is to increase performance, but it does not benefit every application equally. Read-heavy applications can see big performance gains because caching reduces processing time and data access, while write-heavy applications may not gain as much.

A cache typically uses little CPU but a lot of memory, so use memory-heavy instances for the servers in this tier. Early in an application’s lifecycle the caching requirement is small, so a single instance may be enough to serve the whole application tier; as load grows, add instances.

A single caching server could fail at any point, which would put a heavy load on the application. So use a minimum of two instances in the caching tier, in different zones, and add a buffer of extra caching capacity as usage increases. As the number of caching servers grows, the application servers use a hashing algorithm to map each key to the correct caching server.

Another feature used in this tier is time to live (TTL), which caching servers use to expire stored data. Expiry frees memory for new data and removes entries that will no longer be read. Applications with modest caching needs can co-locate the cache on the application servers, which also saves cost.
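A toy sketch of the two ideas in this tier, hash-based key placement and TTL expiry, follows. The host names are hypothetical, and a production system would use a dedicated cache such as memcached rather than an in-process dictionary.

```python
import hashlib
import time

# Hypothetical cache nodes in two different zones.
CACHE_NODES = ["cache-1.zone-a.internal", "cache-2.zone-b.internal"]


def node_for_key(key: str) -> str:
    """Hash the key to pick which caching server holds it."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return CACHE_NODES[digest % len(CACHE_NODES)]


class TTLCache:
    """Tiny in-process stand-in for one cache node with time-to-live expiry."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        if time.time() > expires:      # expired entry: free the slot, report a miss
            self._store.pop(key, None)
            return None
        return value


print(node_for_key("user:42"))  # the app server always asks the same node for this key
```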

 

 

Database Tier

As the name suggests, this final tier contains the database. Web applications use many kinds of databases, but the most common open-source choice is MySQL. Best practice is to run it with one or more slave databases. In the cloud, resources are physically inaccessible and hardware is prone to failure, so more than one slave is needed; if the master fails, a slave can take its place.

An instance might fail unexpectedly and the whole database could be lost. Slaves help limit data loss, but data that has not yet been replicated to a slave can still be lost. Elastic Block Store (EBS) volumes address this: because they persist independently of the instance, data survives an instance failure, and they make backup and restore efficient.

MySQL’s binary logs also affect I/O performance. Write the binlogs to a separate disk, such as the instance’s ephemeral drive; physically separating database I/O from binlog I/O improves the overall efficiency of the database disk subsystem by reserving the database storage for the database alone.

Using the XFS file system is another option: it lets you take periodic snapshots of the database, freezing the file system for the duration of the snapshot. Snapshots are uploaded to persistent, distributed storage for recovery, backup and archival purposes.

In the early phase of the application’s lifecycle, small instances provide enough memory and processing power for both master and slaves; as traffic increases, instances can be moved from small to large sizes.

Many approaches can be used to offload work and increase efficiency, depending on the application’s needs: MySQL Proxy, database sharding, master-master replication and the Relational Database Service (RDS), which is now used in many web applications. NoSQL databases are another option, for workloads that don’t require full RDBMS functionality.
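To show how the master and slaves divide the work, here is a minimal, hypothetical read/write routing sketch. The endpoint names are placeholders, and tools such as MySQL Proxy or an ORM’s replica support do this (and handle replication lag) far more robustly.

```python
import random

# Hypothetical endpoints for the database tier.
MASTER = "mysql-master.internal"
SLAVES = ["mysql-slave-1.internal", "mysql-slave-2.internal"]


def pick_endpoint(sql: str) -> str:
    """Send writes to the master and spread reads across the slaves."""
    is_read = sql.lstrip().lower().startswith("select")
    return random.choice(SLAVES) if is_read else MASTER


print(pick_endpoint("SELECT * FROM orders WHERE id = 7"))               # one of the slaves
print(pick_endpoint("UPDATE orders SET status = 'shipped' WHERE id = 7"))  # master
```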

 

So now you know what is required to build a scalable application; by employing these tiers and practices you can architect highly scalable applications in the cloud.

 

Getting Real with Ruby: Understanding the Benefits

By Jennifer Marsh

Jennifer Marsh is a software developer, programmer and technology writer and occasionally blogs for Rackspace Hosting.

Ruby is an advanced language for many programmers, and a powerful one for building dynamic interfaces on the web. Dynamic web hosting shouldn’t be taken lightly, because security holes still exist. A good cloud web host will offer a safe environment for development while still offering the scalability and usability needed for Ruby programming, testing and deployment.

Space for Testing and Development

Web applications can grow to several gigabytes. For newer Ruby developers, it’s helpful to have enough storage space for backups, so a backup can be made before deploying code changes. Ruby is an interpreted language, but a bug can still mean a lot of time and resources devoted to finding and fixing it. Instead of emergency code reviews, the developer can restore the previous version of the application and then troubleshoot the bug.

Support for Database or Hard Drive Restoration

In severe cases, an application corrupts the data stored in the database. A good web host will back up the database and restore it when the site owner needs it. This is especially useful in emergencies, when the site gets hacked or data is corrupted by application changes or hard drive crashes. The web host should support the client, including in cases of restoring database and application backups.

Find Support for Ruby

To run Ruby, the web host must support the language. Check with the hosting company and verify that the host allows execution of CGI files; a good sign is a host that offers FastCGI and specifies that it supports Ruby and Ruby on Rails. Ruby is typically supported by Linux hosts, but some Windows hosts support it too. Because Ruby is an interpreted language, it can run on any operating system that provides an interpreter.

Ask for Shell Access

Ruby can be a bit hairy to configure. If the programmer is familiar with the language, having shell access helps speed up application configuration. Not all hosts offer shell access, but with extended or advanced service most hosts will oblige the webmaster. Shell access gives the webmaster more control over the Ruby settings.

The most important parts of a web host are customer support and up-time. Most web hosts have a contract with the client that promises a percentage of up-time; this should be around 99% or higher, meaning the website will almost always be up for visitors. Check with the host for contract specifics before purchasing cloud hosting for Ruby.

The Evolution of Single Sign-on

Replacing mainframes with 21st century identity

By Paul Madsen, senior technical architect

The concept of single sign-on (SSO) is not a new one, and over the years it has successfully bridged the gap between security and productivity for organizations all over the globe.

Allowing users to authenticate once to gain access to enterprise applications improves access security and user productivity by reducing the need for passwords.

In the days of mainframes, SSO was used to help maintain productivity and security from inside the protection of firewalls. As organizations moved to custom-built authentication systems in the 1990s, it became recognized as enterprise SSO (ESSO) and later evolved into browser-based plugin or web-proxy methods known as web access management (WAM). IT’s focus was on integrating applications exclusively within the network perimeter.

However, as enterprises shifted toward cloud-based services at the turn of the century and software-as-a-service (SaaS) applications became more prevalent, the domain-based SSO mechanisms began breaking. This shift created a new need for secure connections to multiple applications outside the enterprise perimeter and transformed the perception of SSO.

Large-scale Internet providers like Facebook and Google also created a need for consumer-facing SSO, which did not previously exist.

Prior to these social networks, SSO was used only within the enterprise; new technology had to be created to meet the demands of businesses as well as to securely authenticate billions of Internet users.

There are many SSO options available today that fit all types of use cases for the enterprise, business and consumer, and they can be divided into three tiers, with Tier 1 SSO the strongest and most advanced of the trio. Tier 1 SSO offers maximum security when moving to the cloud, the highest convenience to all parties, the highest reliability as browsers and web applications go through revisions, and generally the lowest total cost of ownership. Tier 2 SSO is the mid-level offering, meant for enterprises with a cloud-second strategy. Tier 3 SSO offers the least security and is generally used by small businesses moving to the cloud outside of high-security environments.

The defining aspect of Tier 1 SSO is that authentication is driven by standards-based token exchange, while the user directories remain in place within the centrally administered domain rather than being synchronized externally. Standards such as SAML (Security Assertion Markup Language), OpenID Connect and OAuth have allowed this new class of SSO to emerge for the cloud generation. Standards are important because they provide a framework for consistent, verifiable authentication of identity, which is also what allows government agencies to profile them to ensure security.
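To illustrate what “standards-based token exchange” means mechanically, here is a deliberately simplified Python sketch of an identity provider issuing a signed, short-lived assertion that a service provider validates. Real Tier 1 SSO uses SAML, OpenID Connect or OAuth with proper key management rather than this toy HMAC scheme, and all names and secrets below are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# Shared secret established out of band between the identity provider (IdP)
# and the service provider (SP). Real deployments use public-key signatures.
SHARED_SECRET = b"demo-secret-not-for-production"


def issue_token(subject: str, lifetime_seconds: int = 300) -> str:
    """IdP side: sign a short-lived assertion about who the user is."""
    claims = {"sub": subject, "iss": "idp.example.com",
              "exp": time.time() + lifetime_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def validate_token(token: str) -> dict:
    """SP side: check the signature and expiry, then trust the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature check failed")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise ValueError("assertion has expired")
    return claims


print(validate_token(issue_token("alice@example.com"))["sub"])  # -> alice@example.com
```

The key point is that the user’s password never leaves the identity provider; the service provider only ever sees a signed, expiring assertion it can verify against the agreed standard.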

These standards have become such a staple in the authentication industry that government agencies like the United States Federal CIO Council, NIST (National Institute of Standards and Technology) and Industry Canada have created programs to ensure these standards are viable, robust, reliable, sustainable and interoperable as documented.

The Federal CIO Council has created the Identity, Credential, and Access Management (ICAM) committee to define a process where the government profiles identity management standards to incorporate the government’s security and privacy requirements, to ensure secure and reliable processes.

The committee created the Federal Identity, Credential, and Access Management (FICAM) roadmap to provide agencies with architecture and implementation guidance that addresses security problems, concerns and best practices. Industry Canada’s Authentication Principles Working Group created the Principles for Electronic Authentication, which were designed to serve as benchmarks for the development, provision and use of authentication services in Canada.

As enterprises continue to adopt cloud-based technologies outside their network perimeter, the need for reliable SSO solutions becomes more vital. Vendors that support these government-issued guidelines offer the strongest and most secure access management available today. Since the establishment of SSO, technological capabilities have advanced greatly, and SSO has been forced to evolve over the past few decades. First-generation SSO solutions were not faced with Internet scale or access from outside the network, whereas today’s SSO is up against many more obstacles.

As technology progresses, SSO will have to grow with it and strengthen its security. For instance, while SSO is the expectation for web browser applications, the emergence of native applications (downloaded and installed onto mobile devices) has highlighted the need for a similar SSO experience for this class of applications. To address these new use cases, new standards (or profiles of existing standards) are emerging, and initiatives like the Principles for Electronic Authentication will have to adapt accordingly in order to offer the best guidance possible.

MaaS applied to Healthcare – Use Case Practice

MaaS (Model as a Service) might allow building and controlling shared, Cloud-ready healthcare data, affording agile data design and economies of scale while maintaining a trusted environment and scalable security.

With MaaS, models map the infrastructure and make it possible to control persistent storage and audit deployments in order to certify that data are coherent and remain linked to specific storage. As a consequence, models make it possible to check where data is deployed and stored. MaaS can play a crucial role in supplying healthcare services: the model containing infrastructure properties includes the information needed to classify the on-premise data Cloud service in terms of data security, coherence, outage, availability and geo-location, and to support assisted service deployment and virtualization.

Introduction
Municipalities are opening new information exchanges with healthcare institutes. The objective is to share medical research, hospital admissions by pathology, assistance and hospitalization data with doctors, hospitals, clinics and, of course, patients. This open data [6] should improve patient care, prevention, prophylaxis and appropriate medical booking and scheduling by making information sharing more timely and efficient. From the data management point of view, this means the service should assure data elasticity, multi-tenancy, scalability and security, together with the physical and logical architectures that provide the guidelines for designing healthcare services.

Accordingly, healthcare services in the Cloud must primarily secure the following data properties [2]:
-      data location;
-      data persistence;
-      data discovery and navigation;
-      data inference;
-      confidentiality;
-      availability;
-      on-demand data secure deleting/shredding [4] [5] [11] [12].

These properties should be defined during service design, and data models play the integral “on-premise” role in defining, managing and protecting healthcare data in the Cloud. When healthcare data models are created, the service is created as well, and properties for confidentiality, availability, authenticity, authorization, authentication and integrity [12] have to be defined inside them: this is how MaaS provides preconfigured service properties.
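As a purely illustrative sketch (MaaS as described here is a modeling practice, not a library, so every class and field name below is hypothetical), the following Python fragment shows how a data model might carry location, confidentiality and availability properties and be used to audit where data actually ends up.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class StorageRule:
    partition: str        # e.g. "clinical_files" or "ambulatory_care"
    location: str         # geo-location constraint, e.g. "eu-west"
    retention_days: int   # drives secure delete / shredding


@dataclass
class HealthcareDataModel:
    """An 'on-premise' data model that carries its own service properties."""
    name: str
    tenants: List[str]                  # hospitals, clinics, pharmacies...
    confidentiality_roles: List[str]    # who may read which data
    availability_target: float          # e.g. 99.9 (%)
    storage_rules: List[StorageRule] = field(default_factory=list)

    def audit_deployment(self, deployed_locations: Dict[str, str]) -> List[str]:
        """Return partitions stored somewhere other than where the model requires."""
        return [r.partition for r in self.storage_rules
                if deployed_locations.get(r.partition) != r.location]


model = HealthcareDataModel(
    name="patient_records", tenants=["hospital_a", "clinic_b"],
    confidentiality_roles=["physician", "lab_technician"],
    availability_target=99.9,
    storage_rules=[StorageRule("clinical_files", "eu-west", 3650)])
print(model.audit_deployment({"clinical_files": "us-east"}))  # -> ['clinical_files']
```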

Applying MaaS to Healthcare – Getting Practice
Applying MaaS to design and deploy healthcare services means explaining how to apply the DaaS (Database as a Service, see [2] and [4]) lifecycle to achieve faster, positive impacts on go-live preparation with Cloud services. The Use Case introduces the practices by which the healthcare service could be defined and then translates them into the appropriate guidelines. The DaaS lifecycle service practices we apply are those described in [4].

Keep in mind that healthcare is a dynamic, complex environment with many actors: patients, physicians, IT professionals, chemists, lab technicians, researchers, health operators and more. The Use Case we are introducing tries to consider the whole system. It lays out the main tasks along the DaaS lifecycle and thus how medical information might be managed and securely exchanged [12] among stakeholders across multiple entities such as hospitals, clinics, pharmacies, labs and insurance companies.

The Use Case
Here is how MaaS might cover the Use Case, and how DaaS lifecycle best practices integrate the properties and directions above:

Objective: To facilitate services to healthcare users, to improve the information exchange experience among stakeholders and to improve the user experience with healthcare knowledge. The Use Case aims to reduce the cost of services through rapid data design, updating and deployment, and to provide data audit and control.

Description: Current costs of data design, update and deployment are high, and healthcare information (clinical, pharmaceutical, prevention, prophylaxis…) is not delivered fast enough based upon user experience. Costs for hospitalization and treatment information should be predictable based upon user experience and interaction.

Actors:
-      Clinical and Research Centres;
-      Laboratories;
-      Healthcare Institute/Public Body (Access Administrators);
-      Healthcare Institute/Public Body (Credentials and Roles Providers);
-      Patients;
-      IT Operations (Cloud Providers, Storage Providers, Clinical Application Providers).

Requirements:
-      Reducing costs and rapidly delivering relevant data to users, stakeholders and healthcare institutes;
-      Enabling decision-making information for actors who regularly need access [11] [12] to healthcare services but lack the scale to exchange (and require) more dedicated services and support;
-      Quickly supporting and updating healthcare data for users across a large reference base with many locations and disparate applications;
-      Ensuring compliance and governance directions are currently applied, revised and supervised;
-      Defining data security, confidentiality, availability, authenticity, authorization, authentication and integrity “on-premise”.

Pre-processing and post-processing:
-      Implementing and sharing data models;
-      Designing data model properties according to private, public and/or hybrid Cloud requirements;
-      Designing the data storage model “on-premise”;
-      Modeling data to calculate physical resource allocation “a priori”;
-      Modeling data to predict usage early and to optimize database handling;
-      Covering outages through archived versions and changes based on model partitioning;
-      Using content discovery to identify and audit data, to restore the service to previous versions and to irrecoverably destroy data when required by regulation.

Included and extended use case:
-      Deployment is guided by model properties and the architecture definition;
-      Data mapping is defined and updated, checking whether the infrastructure provider offers persistence and whether outages are related to on-line tasks;
-      Deploying and sharing are guided by model properties and the architecture definition.

In what follows, we apply a subset of MaaS properties to the healthcare Use Case above. In turn, a subset of Data Model properties is applied along the DaaS lifecycle states:


MaaS Property: Data location
DaaS Lifecycle States: Create Data Model; Model Archive and Change; Deploy and Share
Healthcare Data Model Properties: Data models contain partitioning properties and can include data location constraints. User tagging of data (a common Web 2.0 practice, through the use of clinic user-defined properties) should be managed. Support for compliant storage of preventative care data records should be provided.

MaaS Property: Data persistence
DaaS Lifecycle States: Create Data Model; Model Archive and Change; Secure Delete
Healthcare Data Model Properties: For any partition, sub-model or version of a model, the data model has to label and trace data location. The model defines a map specifying where data is stored (ambulatory care and clinical files have different storages). Provider persistence can be registered. Data discovery can update partition properties to identify where data is located.

MaaS Property: Data inference
DaaS Lifecycle States: Create Data Model
Healthcare Data Model Properties: The data model has to support inference and special data aggregation: an ambulatory record might infer a patient’s insurance file. All inferences and aggregations are defined, updated and tested in the model.

MaaS Property: Confidentiality
DaaS Lifecycle States: Create Data Model; Populate, Use and Test
Healthcare Data Model Properties: The data model guides rights assignment, access controls, rights management and application data security. As different tenants (hospitals, clinics, insurance companies and pharmacies) access the data, users and tenants should be defined inside the model. Logical and physical controls have to be set.

MaaS Property: High availability
DaaS Lifecycle States: Deploy and Share; Model Archive and Change
Healthcare Data Model Properties: The data model and partitioning configuration, together with model changes and versions, permit mastering a recovery scheme and restoring the service when needed. Data inventory (classified by Surgery, Radiology or Cardiology, for example) versus discovery has to be traced and set.

MaaS Property: Fast updates at low cost
DaaS Lifecycle States: Create Data Model; Generate Schema/Update Data Model
Healthcare Data Model Properties: Data reverse and forward engineering permits change management and version optimization in real time, directly on deployed data properties.

MaaS Property: Multi-database partitioning
DaaS Lifecycle States: Create Data Model; Deploy and Share
Healthcare Data Model Properties: Bi-directional partitioning in terms of deployment, storage and evolution through model versioning has to be set. Multi-DBMS version management helps in sharing multi-partitioning deployments: for example, Insurance and Surgery by Patient are normally partitioned and belong to different tenants and different databases.

MaaS Property: Near-zero configuration and administration
DaaS Lifecycle States: Create Data Model; Generate Schema/Update Data Model
Healthcare Data Model Properties: Data models cover and contain all data properties, including scripts, stored procedures, queries, partitions, changes and all configuration and administration properties. This means administrative work decreases, leaving more time for data design, update and deployment. Regulation compliance can be a frequent administrative task: models ensure that healthcare compliance and governance stay aligned.

The Outcome
MaaS defines the service properties through which the DaaS process can be implemented and maintained. As a consequence, when the Use Case is applied following the directions introduced above, the following results should be expected.

Qualitative Outcomes:
1)    Healthcare actors share information on the basis of defined “on-premise” data models: models can be implemented and deployed using a model-driven paradigm;
2)    Data models are standardized in terms of naming conventions and conceptual templates (Pharma, Insurance, Municipality and so on): models can be modified and updated with respect to the knowledge they were initially designed for;
3)    Storage and partitioning in the Cloud can be defined “a priori”, and periodic audits can be set to certify that data are coherent and remain linked to specific sites;
4)    Users consult the information and perform two tasks:
4.1) searching and navigating the knowledge for personal and work activities;
4.2) giving back information about the user experience and about practices/procedures that should be updated, rearranged, downsized or extended depending upon community needs, types of interaction, events or specific public situations.
5)    Models are “on-premise”, policy-driven tools. Regulation compliance rules can be included in the data model; changes to current compliance constraints mean changes to the data model before it is deployed in a new version.

Quantitative Outcomes:
1)    Measurable and traceable cost reduction (to be calculated as a function of the annual Cloud fee, resource tuning and TCO);
2)    Time reduction in terms of fast knowledge design, update, deployment, portability and reuse (to be calculated as a function of SLAs, data and application management effort and ROI);
3)    Risk reduction according to “on-premise” Cloud service design and control (to be calculated as a function of recovery time and chargeback on the cost of applied countermeasures, compared with periodic audits based upon model information).

Conclusion
MaaS might provide a real opportunity to offer a unique, utility-style model life cycle that accelerates cloud data optimization and performance in the healthcare network. MaaS applied to healthcare services might be the right way to transform medical service delivery in the Cloud. MaaS defines “on-premise” data security, coherence, outage, availability and geo-location, along with an assisted service deployment. Models are adaptable to various departmental needs and organizational sizes; they simplify and align healthcare domain-specific knowledge by combining the data model approach with the on-demand nature of cloud computing. MaaS agility is a key requirement of data service design, incremental data deployment and progressive data structure provisioning. Finally, the model approach allows validation of service evolution: the models’ versions and configurations form a catalogue for managing both data regulation compliance [12] and the data contract’s clauses in the Cloud among IT, Providers and Healthcare actors [9].

References
[1] N. Piscopo - ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
[2] N. Piscopo - CA ERwin® Data Modeler’s Role in the Relational Cloud
[3] D. Burbank, S. Hoberman - Data Modeling Made Simple with CA ERwin® Data Modeler r8
[4] N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
[5] N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle
[6] N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data http://cloudbestpractices.wordpress.com/2012/10/21/maas/
[7] N. Piscopo - MaaS Workshop, Awareness, Courses Syllabus
[8] N. Piscopo - DaaS Workshop, Awareness, Courses Syllabus
[9] N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice http://cloudbestpractices.wordpress.com/2012/11/04/applying-maas-to-daas/
[10] N. M. Josuttis – SOA in Practice
[11] H. A. J. Narayanan, M. H. Güneş – Ensuring Access Control in Cloud Provisioned Healthcare Systems
[12] Kantara Initiative – http://kantarainitiative.org/confluence/display/uma/UMA+Scenarios+and+Use+Cases

Disclaimer
This document is provided AS-IS for your informational purposes only. In no event will the contents of “How MaaS might be applied to Healthcare – A Use Case” be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use of or inability to use this documentation or the products, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this document or the products’ use or performance. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.

Oracle Database Backups to the Amazon Cloud

The traditional way of performing backups uses Oracle RMAN in combination with media management layer software (typically NetBackup, Tivoli or similar), which writes backup data to a remote robotic tape unit. Tapes are then stored offsite in a secure location. It is a well-known fact that tape media poses certain challenges in reliability and physical handling.

The main attraction of cloud-based backups is that they are inherently disk based, always accessible and offsite, and they require no capital expenditure; all tape-related costs are eliminated. On the other hand, new costs are incurred for cloud backup storage and service, and data is transferred over the public network, i.e. the Internet.

Cloud-based backups can be used for quick database refreshes or duplication of source databases to any target environment. In practice you have an unlimited amount of storage that can be instantly attached to any database server as temporary storage for backups and restores. For example, you might want to create a new QA environment using a development database as the data source; this can be achieved by backing up the development database to Amazon S3 and then restoring it to QA.

Technology to perform tightly integrated Oracle backups to the Amazon cloud (S3) is available. Please refer to http://www.oracle.com/technetwork/database/features/availability/twp-oracledbcloudbackup-130129.pdf for technical details. RMAN is integrated with Amazon via the Oracle Secure Backup (OSB) Cloud Module, which automatically directs backups to Amazon S3 storage. Backups can be encrypted and run in parallel over multiple channels to comply with security and performance requirements.

A simple change to your RMAN configuration parameters will redirect backups to the Amazon cloud. The parameters have to be configured carefully to take maximum advantage of compression and parallel execution and to minimize the impact of network speed on data transfer rates.
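The OSB Cloud Module handles the transfer from inside RMAN itself, but as a rough, generic illustration of the same principles (compress first, then push large backup pieces to S3 with parallel multipart transfer, analogous to multiple RMAN channels), a sketch along these lines is possible with boto3. The bucket and file names are hypothetical, and this is not a substitute for the integrated RMAN path.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Illustrative only: the OSB Cloud Module does this inside RMAN itself.
s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,                     # parallel parts, like parallel RMAN channels
)

s3.upload_file(
    Filename="/backup/orcl_full_20130501.bkp.gz",  # pre-compressed backup piece
    Bucket="my-rman-backups",
    Key="orcl/orcl_full_20130501.bkp.gz",
    Config=config,
)
```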

For databases larger than a couple of hundred gigabytes it would be impossible to rely on standard, out-of-the-box Internet data transfer rates. The AWS Direct Connect service lets you establish a direct connection from your on-premise network to an Amazon VPC using one or more 1 Gbps or 10 Gbps links. There is no charge for inbound data transfer, which makes it well suited to backup purposes.

An open-source fast file transfer protocol called Tsunami UDP provides faster data transfers than is possible with FTP.

Additional products such as Aspera boost network data transfer rates and make it possible to move terabytes of data on a daily basis; see http://aws.amazon.com/solutions/solution-providers/aspera/. There is an additional cost associated with Aspera usage.

Restore, i.e. disaster recovery, tactics will not have to be significantly modified to take advantage of cloud-based backups.

The OSB Cloud Module is currently available for Linux 32- and 64-bit, SPARC 64-bit, and Windows 32-bit environments.

Designing Applications for the Cloud

When designing applications for the cloud, or extending on-premise applications into the cloud, it should go without saying that you can’t just deploy and expect good results. There is much to consider from the very beginning when using cloud platforms in development: scaling out, taking new and imaginative approaches to data storage, making full use of the wide range of products and services on offer from cloud providers (beyond hosting), and exploring the many flavours of hybrid solution, which means all types of business can leverage the benefits of the cloud. These details are laid out further in the following SlideShare presentation.

“Architecting for the Cloud” is the theme for the upcoming Amazon Web Services User Group UK meetup (15th May, London). Intechnica’s Technical Director Andy Still will be there, and plans to talk about extending an application to create a caching platform for mobile access within AWS. If you’re in the London area this is definitely worth coming along to for the discussions and networking around AWS and cloud computing.

Read more blogs about cloud, development and application performance from Intechnica

Cloud Computing Use Case: Development & Test Environments

In a recent article, “Put Your Test Lab In The Cloud”, InformationWeek outlined the pros, cons and considerations you must take into account when hosting test labs in the cloud. Using the cloud for this purpose is not necessarily a new idea, and it’s one that certainly makes a lot of sense: replication of test results depends upon consistency across all variables, and putting a test lab in the cloud lets you achieve that consistency from anywhere, for anyone who needs to use it.

Indeed, the use of private or public cloud services, like Amazon Web Services, as a platform for software development and testing is already common practice for some businesses. The benefits include the general positives of cloud, such as cost savings (no start-up cost, and hardware upgrades, maintenance and so on come out of the equation), but also extend to specific benefits: increased control over projects, quick duplication of environments (especially compared with “tin” set-ups), speed of deployment, ease of collaboration, and the ability for testers and developers to access environments on demand, removing a barrier to efficiency. It’s not hard to see why the practice is growing in popularity along with other cloud services.

To best understand the benefits of cloud computing in software development and test environments, it’s useful to see the process in action. We recently hosted a webinar showing the process in detail, from configuring a template for the environment, to launching and connecting remotely to the machine image. In our example, we used Amazon Web Services with a custom management tool, but the process is fairly standard.
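The webinar used a custom management tool, but the underlying steps translate directly to the provider’s API. Here is a hedged boto3 sketch of launching a test environment from a prepared machine image; the AMI ID, key pair, security group and instance type are all placeholders.

```python
import boto3

# Hypothetical values: replace with your own AMI, key pair and security group.
ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # the machine image built from the template
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="test-lab-key",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "dev-test"}],
    }],
)

instance = instances[0]
instance.wait_until_running()   # block until the test machine is reachable
instance.reload()
print("Connect to:", instance.public_dns_name)
```

Tearing the environment down again is a single terminate call, which is what makes on-demand duplication of test environments so much cheaper than keeping equivalent hardware on standby.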

It’s important to note that different considerations apply to each cloud service provider, especially when weighing up public and private cloud offerings. Obviously, it’s faster and easier to get started with a public cloud, but it can be harder to manage costs, and some would consider a layer of control to be lost. On the other hand, private clouds are costly and time-consuming to set up in comparison, and a much bigger commitment to justify.