How can I better manage my business’s multi-cloud state?

The first step is usually to audit your technology footprint and identify all the specific services your employees use. Next, you need to ensure you have access to the requisite expertise. You also need to determine whether you need cloud brokers and/or cloud management tools to be successful.

Once that’s done, it’s time to put strategies, business processes and personnel in place to ensure that:

  • The right cloud services are deployed to support the right workloads
  • You and your partners are able to support those workloads effectively
  • Various services are integrated as required
  • Security requirements are met
  • You are able to shift appropriately between providers as cost functions change and new features become available
  • You can control Shadow IT while simultaneously enabling teams across the business to quickly access the cloud resources they want and need

Wait… what are cloud brokers and cloud management tools?

Third-party cloud brokers can source, compare and procure cloud services, and will often handle integration, management and accounting. Cloud management tools such as RightScale Cloud Management can help you monitor usage, performance and costs across multi-cloud environments. A wide variety of such tools is available in the vendor ecosystems around each leading cloud platform.

How do I decide which cloud services best suit which workload?

It really depends on your business and on each particular use case. For example, if you’re a traditional Microsoft IT shop, Azure may be the best fit for many workloads, with a Microsoft Private Cloud architecture for workloads with stringent security or compliance requirements.

However, you may also want to utilize AWS services like Kinesis if you need real-time streaming data, or Rackspace’s ObjectRocket database-as-a-service if your app requires highly performant MongoDB or Redis.

To make the best decisions, you need cloud experts who understand the wide array of services offered by the leading providers, their strengths and weaknesses, and how they map to your specific needs.

Backblaze lights up cloud storage with dirt-cheap prices

Backblaze slashes prices on cloud storage to a half-cent per gigabyte per month, but the lack of Amazon API compatibility might limit its appeal

Backblaze, the backup service company that garnered attention for publishing its internal statistics about hard drive failure rates, is throwing open the doors on a cloud storage service with rock-bottom prices.

According to a blog post announcing the new service and its pricing page, Backblaze’s B2 Cloud Storage costs half a cent ($0.005) per gigabyte per month. Uploads are free; downloads are 5 cents per gigabyte (plus a fee of 0.4 cents per 1,000 transactions). A free tier is also available: up to 10GB can be stored at no cost, albeit with a download limit of 1GB or 2,500 downloads per day, whichever comes first.
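
To see what those rates imply, here is a quick back-of-the-envelope cost model in Python using the numbers quoted above (the rates come from the announcement, not a live price list):

```python
# Backblaze B2 list prices as quoted in the announcement
STORAGE_PER_GB_MONTH = 0.005    # half a cent per GB stored per month
DOWNLOAD_PER_GB = 0.05          # 5 cents per GB downloaded
PER_1000_TRANSACTIONS = 0.004   # 0.4 cents per 1,000 download transactions

def monthly_cost(stored_gb, downloaded_gb, transactions):
    """Estimated monthly bill in dollars (ignores the free tier)."""
    return (stored_gb * STORAGE_PER_GB_MONTH
            + downloaded_gb * DOWNLOAD_PER_GB
            + (transactions / 1000) * PER_1000_TRANSACTIONS)

# Example: 1 TB stored, 100 GB downloaded, 50,000 download requests
print(f"${monthly_cost(1000, 100, 50_000):.2f}")  # -> $10.20
```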

Backblaze sees its main customers as developers, who can access B2 through a RESTful API, and users, who can go through a Web-based interface to upload data. The latter will probably see B2 as a Dropbox competitor, although B2 doesn’t currently have desktop or mobile clients like Dropbox.
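
For a sense of the developer workflow, here is a minimal sketch of authorizing against B2’s native REST API with Python’s `requests` library. It assumes the v2 endpoints Backblaze documents today (the API was at v1 when the service launched), and the credentials are placeholders:

```python
import requests

ACCOUNT_ID = "your_account_id"      # placeholder credentials
APPLICATION_KEY = "your_app_key"

# Authorize: returns an auth token plus the base URL for API calls
resp = requests.get(
    "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
    auth=(ACCOUNT_ID, APPLICATION_KEY),
)
resp.raise_for_status()
auth = resp.json()

# Subsequent calls go to auth["apiUrl"] with the token as a header,
# e.g. listing the account's buckets:
buckets = requests.post(
    f'{auth["apiUrl"]}/b2api/v2/b2_list_buckets',
    headers={"Authorization": auth["authorizationToken"]},
    json={"accountId": auth["accountId"]},
).json()
print(buckets)
```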

Developers and enterprise IT customers could use B2 as a cheap mirror for data in an existing cloud storage service or an on-premises data center. In that case, B2’s value doesn’t revolve around its price alone, but around whether the bandwidth and latency to and from the B2 data center will be up to snuff.

Another possible issue: B2 is served by only one data center. According to a discussion thread on Hacker News (with replies by self-identified Backblaze employee brianwski), there are plans to add another data center because the existing one is running out of space. Also under discussion is the possibility of an S3-compatible API (the current API doesn’t offer one), but it would require load-balancing technology that Backblaze originally eschewed in order to keep costs down.

OpenStack block storage (Cinder)

Block storage works much like attaching and detaching an external hard drive to your operating system for local use. It is useful for database storage, for raw storage on a server (format it, mount it and use it), or for combining several volumes to meet distributed file system needs (for example, building a large Gluster volume out of several block storage devices attached to a virtual machine launched by Nova).
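
As a concrete illustration, here is a minimal sketch using the openstacksdk library’s cloud layer to create a Cinder volume and attach it to a Nova instance; the cloud name and server name are placeholders for your own deployment:

```python
import openstack

# Credentials are read from clouds.yaml; 'mycloud' is a placeholder
conn = openstack.connect(cloud="mycloud")

server = conn.get_server("db-server-01")  # hypothetical instance name

# Create a 100 GB block storage (Cinder) volume and wait until available
volume = conn.create_volume(size=100, name="db-data", wait=True)

# Attach it to the instance; inside the guest it appears as a raw
# block device (e.g. /dev/vdb) that you can format and mount
conn.attach_volume(server, volume, wait=True)
```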

The second type, object storage (Swift), fulfills scaling needs without bounds: storage that can scale without worry, suited to static objects. It can be used for storing large static data such as backups and archives. It is accessed through its own API and is replicated across data centers to withstand large disasters.
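
Here is a similarly minimal sketch of the object storage workflow using the python-swiftclient library; the Keystone endpoint, credentials and file names are placeholders:

```python
from swiftclient.client import Connection

# Placeholder credentials; point these at your own Keystone endpoint
conn = Connection(
    authurl="https://keystone.example.com:5000/v3",
    user="demo",
    key="secret",
    auth_version="3",
    os_options={
        "project_name": "demo",
        "user_domain_name": "Default",
        "project_domain_name": "Default",
    },
)

conn.put_container("backups")  # create the container if it doesn't exist

# Store a static object -- an archive, in this case
with open("site-backup.tar.gz", "rb") as f:
    conn.put_object("backups", "2015-09/site-backup.tar.gz", contents=f)

# Fetch it back over the same HTTP-based API
headers, body = conn.get_object("backups", "2015-09/site-backup.tar.gz")
```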

What I Learned About MapR

MapR, based in San Jose, California, provides a commercial version of Hadoop noted for its fast performance.  This week at the Strata Conference, I got a chance to talk to the folks at MapR and found out how MapR differentiates itself from other Hadoop offerings.

MapR Booth at Strata Conference

MapR’s speed appears to come from its filesystem design. It is fully compatible with standard open source Hadoop, including Hadoop 2.x, YARN and HBase, but uses a more optimized filesystem structure to provide an additional speed boost.

MapR promotes the following benefits:

  • No single point of failure
    Normally the NameNode is the single point of failure for a Hadoop installation. MapR’s design avoids this issue.
  • NFS mount data files
    MapR allows you to NFS-mount files into an HDFS cluster. This saves you from copying files into MapR, and you might not even need ingest tools like Flume. Writing directly into the files opens up additional options, such as querying Hadoop on near-real-time data (see the sketch after this list).
  • Fast access
    MapR set a speed record by sorting 1.5 trillion bytes in one minute using its MapR Hadoop software on the Google Compute Engine cloud service.
  • Binary compatible with Hadoop
    MapR is binary compatible with open source Hadoop, which gives you more flexibility in adding third-party components or migrating existing applications.
  • Enterprise support
    Professional services, enterprise support, and training and certifications
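
To illustrate the NFS point above: because a MapR cluster can be mounted like any other NFS share (commonly under a path such as /mapr/<cluster-name>), plain file I/O is enough to land data in the cluster. The mount point and file names below are hypothetical:

```python
import csv

# Hypothetical NFS mount point of the MapR cluster
MAPR_MOUNT = "/mapr/my.cluster.com/user/etl/incoming"

# Append records directly into cluster storage as they arrive;
# downstream Hadoop jobs can read them in near real time
with open(f"{MAPR_MOUNT}/events.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["2014-02-12T10:00:00", "click", "user-42"])
```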

MapR has attracted a number of featured customers including the following:

  • Comscore
  • Cision
  • Linksmart
  • HP
  • Return Path
  • Dotomi
  • Solutionary
  • Trueffect
  • Sociocast
  • Zions Bank
  • Live Nation
  • Cisco
  • Rubicon Project

MapR is also partnering with both Google and Amazon Web Services for cloud-based Hadoop systems.

MapR currently comes in three editions:

  • M3 Standard Edition
  • M5 Enterprise Edition (with “99.999% high availability and self-healing”)
  • M7 Enterprise Edition for Hadoop (with fast database)

Additionally, in conjunction with the Strata Conference this week, MapR announced the release of the MapR Sandbox. Any user can download the MapR Sandbox for free and run a full MapR Hadoop installation within a VMware or VirtualBox virtual machine. The sandbox provides a suitable learning environment for those who want to experience the use and operation of MapR Hadoop without investing a lot of effort in installation. I haven’t downloaded and installed the MapR Sandbox yet; if you have already tried it out, tell me what you think in the comments below.

MapR website: http://www.mapr.com

Big Data Analytics – What Is It?

According to a recent IBM estimate, 2.5 quintillion bytes of data are created every day – so much that 90% of the data in the world today has been created in the last two years. It is a mind-boggling figure, and the irony is that we feel less informed in spite of having more information available today than ever before.

The explosive growth in data volumes has had a profound effect on today’s businesses. Online users create content such as blog posts, tweets, social networking interactions and photos, and servers continuously log messages about what those users are doing.

This online data comes from posts on social media sites like Facebook and Twitter, YouTube videos, cell phone call records and so on. This data is called Big Data.

WHAT IS BIG DATA?

The Big Data concept refers to datasets that grow so large that they become difficult to manage using existing database management concepts and tools. The difficulty can relate to data capture, storage, search, sharing, analytics, visualization and so on.

Big Data spans three dimensions: Volume, Velocity and Variety.

  • Volume – The size of the data is very large, measured in terabytes and petabytes.
  • Velocity – Data must often be processed as it streams into the enterprise in order to maximize its value to the business; the role of time is critical here.
  • Variety – Big Data extends beyond structured data to include unstructured data of all varieties: text, audio, video, posts, log files and so on.

WHY BIG DATA?

When an enterprise can leverage all the information available in its data, rather than just a subset, it gains a powerful advantage over its market competitors. Big Data can help organizations gain insights and make better decisions.

Big Data presents an opportunity to create unprecedented business advantage and better service delivery. It also requires new infrastructure and a new way of thinking about the way business and the IT industry work. The concept of Big Data is going to change the way we do things today.

The International Data Corporation (IDC) study predicts that overall data will grow by 50 times by 2020, driven in large part by more embedded systems such as sensors in clothing, medical devices and structures like buildings and bridges. The study also determined that unstructured information – such as files, email and video – will account for 90% of all data created over the next decade. But the number of IT professionals available to manage all that data will grow to only 1.5 times today’s levels.

The digital universe is 1.8 trillion gigabytes in size, stored in 500 quadrillion files, and it more than doubles in size every two years. Comparing the digital universe with our physical universe, there are nearly as many bits of information in the digital universe as there are stars in our physical universe.

CHARACTERISTICS OF BIG DATA

A Big Data platform should provide a solution designed specifically with the needs of the enterprise in mind. The following are the basic features of a Big Data offering:

  • Comprehensive – It should offer a broad platform that addresses all three dimensions of the Big Data challenge: Volume, Variety and Velocity.
  • Enterprise-ready – It should include performance, security, usability and reliability features.
  • Integrated – It should simplify and accelerate the introduction of Big Data technology into the enterprise, and it should integrate with the information supply chain, including databases, data warehouses and business intelligence applications.
  • Open source based – It should be based on open source technology with enterprise-class functionality and integration.
  • Low latency reads and updates
  • Robust and fault-tolerant
  • Scalability
  • Extensible
  • Allows ad hoc queries
  • Minimal maintenance

BIG DATA CHALLENGES

The main challenges of Big Data are data variety, volume, analytical workload complexity and agility. Many organizations are struggling to deal with increasing volumes of data. To solve this problem, organizations need to reduce the amount of data being stored and exploit new storage techniques that can improve performance and storage utilization.

SUMMARY AND CONCLUSION

Big Data is a new gold rush and a key enabler for the social business. A large or medium-sized company can neither make sense of all the user-generated content online nor collaborate effectively with customers, suppliers and partners on social media channels without Big Data analytics. Collaboration with customers and insights from user-generated online content are critical for success in the age of social media.

In a study, McKinsey’s Business Technology Office and the McKinsey Global Institute (MGI) calculated that the U.S. faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of Big Data.

The biggest gap, by a factor of ten, is the shortage of skilled managers able to make decisions based on such analysis. Growing talent and building teams to make analytics-based decisions is the key to realizing the value of Big Data.

Thank you for reading. Happy Learning!!