Eric D. Schabell: February 2017

Thursday, February 23, 2017

Taste of Summit - Discover the Foundations of Digital Transformation

This year a few previews are being given for Red Hat Summit, which takes place May 2-4 in Boston, MA.

I will be presenting a session on digital foundations, as previously published, but wanted to offer this short Taste of Red Hat Summit to you in advance of the recording that will soon be put online.

My full session will be on the following:

Discover the foundations of digital transformation

The core of digital transformation is the ability to provide technology solutions to your customers in a fast-paced world while satisfying business aspirations. Many organizations are following the story line, fighting the good fight, but how can Red Hat and Open Source guide your journey? This session takes you on a journey to start laying the foundations of your digital transformation story, based on use cases and examples that you can explore when you return home. Join us for this hour of power, where you are given the inspiration to start building your digital foundations.

The Taste of Summit preview can be found here:



I will post a link to the recording when it's live.

Monday, February 20, 2017

Cloud Happiness - OpenShift Container Platform v3.4 install demo updated

Get OpenShift Container Platform v3.4 today!
There is no easier way to install your very own Cloud than with OpenShift Container Platform (OCP).

A few months back I showed you how to go from no Cloud to fully Cloud enabled with a container based application development platform in just over two minutes.

That was based on OCP 3.3, but now it's time to update this project to get you onto the newer version 3.4, with many new features I know you want to get your hands on.

If you have been following my journey through the application development phases of storytelling, you will have seen that I was an early fan of Cloud based solutions like OpenShift. This was a way to take your application development from your local resources and move them onto a remote set of resources, while continuing to work locally as you always have.
Figure 1. Container images pulled to your box.

As of today I have updated the OpenShift Container Platform Install Demo to deliver version 3.4, without changing any of the installation steps. It is so simple that I believe anyone can set this up in just over two minutes. Let's take a look, as it is only a three-step process:

Install in 3 simple steps...

  1. Download and unzip this demo project.
  2. Run 'init.sh', then sit back.
  3. Follow the displayed instructions to log in to your brand new OpenShift Container Platform!
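
From a terminal, those steps might look like the following minimal sketch; the archive and directory names are illustrative, as they depend on where you download the project:

  $ unzip ocp-install-demo-master.zip   # name of the downloaded archive may differ
  $ cd ocp-install-demo-master
  $ ./init.sh                           # checks prerequisites, then installs and starts OCP v3.4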

Figure 2. The JBoss product templates are installed from their image streams.
You need to download and unzip the project, then run the installation script and sit back until you see the output at the end showing you where to log in to your brand new OpenShift Container Platform.

The script will check whether you have the required tools installed; if not, you will get a pointer to where you can download these requirements. This means you don't have to worry about finding out what you need: just run the installation and it will tell you where to get anything that is missing.

Also note that if you have run this installation before, it's set up to always give you a clean running installation by fixing anything that is left running or blocking an installation. No intervention is needed by you.

In figure 1 you see the installation starting, where the container layers are pulled onto your machine and set up.

Validation is shown in figure 2, where the IP address of your OCP login console is presented. I then make sure your OCP has the latest and greatest JBoss middleware streams loaded and update the RHEL 7 streams.
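
If you want to watch those streams arrive, a quick check from the command line might look like this; the 'openshift' namespace is the standard location for shared image streams:

  $ oc get imagestreams -n openshift   # the JBoss middleware and RHEL 7 streams appear here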

Now you are almost ready; I just need to show you how to log in.
Figure 3. Final installation details given.

Figure 3 shows the address that was dynamically created (in my case it is showing https://192.168.99.100:8443); just paste it into your browser and you can log in with any of the given users. Also note that you might want to completely clean up this demo later by running the command shown.
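
If you prefer a terminal over the browser, the oc client can log in to the same address; substitute the address and one of the users printed by your own installation output:

  $ oc login https://192.168.99.100:8443 -u admin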

As I have updated the image streams, it will take some time for them to be pulled into your OCP and to appear in your lists of available platforms. Log in with the admin user and you will see that you need to create a project; just click on the New Project button.

You can fill in the form shown in figure 4 any way you like, but I chose to line it up as the project that will soon host all my Red Hat Cloud demo projects.

Once you submit that form, you are presented with an overview of the product templates for your projects that I installed above (remember, it might take a few minutes for them all to appear, so take a sip of coffee now as it is your only chance in this process).

Figure 4. Fill in a new project form as desired.
You can now start using the catalog containing the JBoss middleware product templates to develop applications on the OCP Cloud.
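
As a side note, the same project can also be created from the command line with the oc client; the project name and display name here are illustrative:

  $ oc new-project redhat-cloud-demos \
      --display-name="Red Hat Cloud Demos" \
      --description="Home for the Red Hat Cloud demo projects"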

This concludes the installation of OCP and you are now ready to start containerized application development. I assume you can find more information online if you are interested in getting started with the basics of container development on OCP, so I won't go into that here.

If you are looking for some deeper examples of running JBoss middleware on OCP then check out the examples collection at Red Hat Demo Central.

Here's wishing you many happy days of containerized application development in the Cloud!

Thursday, February 16, 2017

Digital ROI - How to approach digital returns


Let’s assume you have started looking at how your digital journey might take shape.

It’s not a matter of deciding if you are going to take a digital transformation journey, but a question of which parts of your existing business you are going to start transforming into a new digital business.


Engaging your customers in new ways and delivering on their expectations will require transforming many of the foundational pieces in your architecture, putting you on the road to accelerating everything. This also cannot leave behind the existing applications and value that you have delivered in the past, so there needs to be a way to pick and choose what you are going to transform.
The series on Digital Foundations has discussed how you can lay the bricks of your new digital future with open technologies, remaining flexible in maintaining existing applications while starting the journey towards a modern application delivery architecture.

Catch up on the series Digital Foundations...
Now that you are looking at how your digital journey might start, you are certainly trying to identify a list of assets or areas in your architecture and applications that can be targeted for transformation to your modern digital architecture.


Digital Returns

One of the first questions that comes up is how you can sell this to your CIO, meaning that some form of Return on Investment (ROI) enters the discussion. First, a definition of what we are talking about with digital ROI:

Digital ROI is the benefit an investor in digital transformation gains from open technologies as a result of an investment in some transformation effort on their digital journey.

In our case we are going to take this down a different path than is traditional with ROI: we are going to avoid the cost aspect entirely. This is a departure from tradition, but with most ROI ‘calculators’ you are left with trust issues, presented with results that are often based on suspect calculations and that always show the results a vendor needs to sell you their products.
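
For contrast, the traditional calculation reduces everything to a cost ratio:

  ROI = (gain from investment - cost of investment) / cost of investment

It is precisely the cost figures in that ratio that vendor calculators tend to massage, which is why we are setting them aside here.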

Open technologies bring more to your transformation efforts than pure cost savings, and capturing these other aspects for your business case gives you more confidence in the eventual cost effectiveness of an effort.

Open technologies have great influence on the costs involved with digital transformation paths, but they provide much more in the way of indirect benefits that give you greater trust in the ROI story you are building. It is in these open technologies that we will look for our cost effectiveness, and the results might surprise you.

Whether you are looking at the foundations of your infrastructure, at the operations and management aspects, or at modernizing your application development, you will find many benefits that better align with your sense of positive gain in your ROI calculations.

In the next article in the series Digital ROI, let’s take a look at what might be of interest when engaging open technologies to build your case for transformation.

Monday, February 13, 2017

Ops Happiness - Harness Data for Operations Intelligence

In the pursuit of Ops Happiness...

(Written with guest author: Miguel Perez Colino, Senior Product Manager, Integrated Solutions Business Unit, Red Hat)


As covered in the previous article, The Quest for Operations Intelligence, we have very high expectations of any modern Cloud architecture applications deployed on Red Hat hybrid cloud solutions.

No matter how much support is put into place, the customer needs to be able to operate their hybrid clouds.

After taking a look at correlating all of the available data, we reached the conclusion in the previous article that we needed to do something more structured.

It’s all about the model

We may wonder whether we are missing anything, and that is why looking at metrics aggregation projects can help. We find the answer to this in two places. First, Juan Manuel Rey Portal, an Architect with experience in virtualized operations, described how some ops tooling attempts to correlate events with logs. Second, a blog post titled Hawkular APM supports OpenTracing and Alerts describes how they can generate events such as alerts with the collected metrics.

To summarize, to have good IT situational awareness we need the following data:
  • Logs
  • Metrics
  • Configuration Data (tracked periodically)
  • Events (Alerts and Actions like application updates, software upgrades, etc.)
In the process of transforming data into information, we need to understand what each piece of data represents. This requires us to have a related data structure for the logs, metrics and configurations being processed. There is a need to define a Common Data Model for it, for example starting with the data received from an OpenShift Container Platform. In the words of Peter Portante, “A Common Data Model is about defining namespaces to avoid conflicts, defining common fields that should be shared, and providing field data definitions for clarity.”
This is key to providing meaning to data, to integrating different sources, to connecting and sharing it with different third-party analysis tools, and to making IT more understandable. It would be of great value to share such a model, letting it grow in an open source way into an open standard hosted, for example, by The Linux Foundation or the Cloud Native Computing Foundation.
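
As a purely illustrative sketch of the idea (the field names below are hypothetical, not the actual model definitions), a container platform log line normalized to a Common Data Model might look like this:

  $ cat normalized-log-record.json
  {
    "@timestamp": "2017-02-13T10:15:00.000Z",
    "hostname": "node1.example.com",
    "level": "err",
    "message": "Readiness probe failed for container 'frontend'",
    "kubernetes": {
      "namespace_name": "demo-project",
      "pod_name": "frontend-1-a1b2c"
    }
  }

The namespaced "kubernetes" block illustrates the conflict-avoiding namespaces and shared common fields that Peter Portante describes.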

Once a Common Data Model adds meaning to the data (some call it tagging, others processing, others enrichment), it becomes information ready to be correlated. We are performing something like Military Intelligence:

 Information collection and analysis to provide guidance and direction to commanders in support of their decisions. This is achieved by providing an assessment of data from a range of sources, directed towards the commander's mission requirements or responding to questions as part of operational or campaign planning. 

For IT Intelligence, instead of commanders we have different personas that could benefit from having all this information aggregated and correlated:

Figure: the personas that benefit from aggregated and correlated IT information.


As is often stated at Red Hat, "We grow when we share..." and this also applies to IT Intelligence. There are many partners that can make good use of the data being collected and processed. It is important to be prepared to share, since the data can be re-processed and correlated with even more data from firewalls, network equipment, internal database stats, etc. There is Red Hat Insights and a whole ecosystem of associated tools that can provide added value to our solutions. Having a well-defined, unified point of contact to gather this data can help us reduce the deployment and operational costs of our own tooling and third-party tooling. It also gives us the opportunity to have a certification mechanism for it.


In summary, log aggregation is the necessary starting point for the situational awareness needed to operate a full cloud deployment efficiently. To achieve IT intelligence, more relevant data is required, such as metrics, configuration and events. This data has to be interpreted with a Common Data Model to be able to correlate it and transform it into useful information. This could become the access point to a whole ecosystem that can extract even more value from that information.


Call to action


What's next? We are working on prototypes to learn how to put the above information to work and to learn how users are solving this problem. If you want to lend a hand or join us in this effort, you can contribute in the ViaQ GitHub repo or fill in this form with your own experience.

Thursday, February 9, 2017

Digital Foundations - Paving the road to Cloud solutions

When building anything substantial, such as a house or bridge, you start by laying down a solid foundation.

Nothing changes this aspect of building brick by brick when you move from traditional constructions to application development and architectural design of your supporting infrastructure. Throw in Cloud terminology and you might think that the principles of a solid foundation are a bit flighty, but nothing is further from the truth.

In the previous article, I talked about how, as your organization expands, it hits walls while attempting to patch together its architecture and scale out its infrastructure to meet its needs.

Digital foundations

Let's take a look in this article at how you can use the previous Cloud use cases and start paving the road to Cloud solutions that support your digital journey. The path to your Cloud solutions lies paved with open technologies, and here is why.

Cloud solutions with open technologies

In previous articles you have walked through the complex landscape that is your modern infrastructure. It contains legacy elements that run in virtualized landscapes, and it might harbor ambitions to run modern cloud-native application delivery methods.

Either way, you have looked deeply at four of the common Cloud use cases that describe the abilities you are trying to apply to your own situation. You want to accelerate your service delivery, optimize existing IT assets, provide massively scalable infrastructure, and modernize both operations and development processes.

Download this paper today!
All of this is on the road to a digital foundation that can support your transformation to a digital enterprise in the modern era.

While there are many choices as to the technology you might use to help you on your journey, the vast majority are betting on open technologies as their best option. With good reason.

What are some of the reasons behind paving your road to Cloud solutions with open technologies?

The road is open

With studies showing that 76% of Clouds are built on open technologies (Linux Foundation's Global 2013 Survey), it's no wonder enterprises are looking for enterprise support, direct input into new features, stable and certified components, quick bug fixes, predictable product lifecycles, and advanced functionality.

Another finding is that more than half indicated that integrating and protecting their Cloud infrastructure is their primary challenge. When not using open technologies, this means struggling with high prices for components, vendor-controlled innovation, and the mother of all lock-ins: high exit costs in proprietary models.

Take a look at the only real solution to your problems and understand how the road to Cloud solutions is paved with open technologies:




The story continues... next up in building the foundations of digital transformation, I am looking at how you can approach digital return on investment (ROI) studies.

Monday, February 6, 2017

Ops Happiness - The Quest for Operations Intelligence

In the pursuit of Ops Happiness...
(Written with guest author: Miguel Perez Colino, Senior Product Manager, Integrated Solutions Business Unit, Red Hat)

We have very high expectations of any Cloud Native or mode 2 applications deployed on Red Hat hybrid cloud solutions.


When running Red Hat technologies in production, we want our new workloads to be running on top of certified products. They should be architected and deployed with help from certified professionals, proactively maintained with the help of world class support services and have the option to enable organizational resources with training and certifications.


No matter how much support is put into place, the customer needs to be able to operate their hybrid clouds.


From log aggregation to IT intelligence

Let’s imagine we are delivering a solution for a customer that is building their Digital Foundations to modernize their application development. It’s based on a private Infrastructure as a Service and leverages a container application platform. We can use the Reference Architecture to help deploy Red Hat OpenShift Container Platform on Red Hat OpenStack Platform, and review compatibility beforehand using the Cloud Deployment Planner. Now, as Alessandro Perilli explains in How to manage the cloud journey?, complexity can grow with scale, so it's better to tackle it from the very beginning.
Log aggregation?
Now we add management capabilities, starting with the Cloud Management Platform, Red Hat CloudForms, which can handle both the IaaS and the container application platform. It manages all the components deployed and provides insight into the microservice applications built with containers, orchestrated by Kubernetes, running on instances within a tenant, on top of the capabilities provided by the hardware platform.


With this we have covered Day 0 to be able to plan and Day 1 to be able to deploy, and now we are facing Day 2, in which we operate the full platform. What would our customers do on Day 2 if they faced a physical issue, such as a network cable or network card failing?

The first step would be to investigate why a particular application is behaving incorrectly. We would review the metrics, the logs, the changes to the application in question, and the configuration of the application server, only to realize that the root of the issue is somewhere else. Then we would go to the container application platform to get its logs and config changes, plus the logs and metrics of the operating system underneath ... all of them, from all the Virtual Machines (VMs), to find out that the root of the issue is somewhere else.


Finally, we would go to the IaaS deployment to get all the logs, metrics and configuration changes performed as well as the ones from the operating system ... all of them, from all the physical machines to realize that the root of the issue is yet again, somewhere else ... in an improperly patched cable or failed piece of physical hardware. Even with the great tools that we have in our management portfolio, finding a root cause for an issue like this requires increasing the situational awareness of our IT. With the number of layers and pieces required for mode 2 deployments and applications, this goes from a "nice to have" to an "absolute need".
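
To make that hunt concrete, a by-hand version might look something like the following from a terminal; the pod, VM, and host names are illustrative:

  # Container application platform layer:
  $ oc logs frontend-1-a1b2c
  $ oc get events
  # Operating system on each VM:
  $ ssh cloud-user@app-vm-1 journalctl --since "1 hour ago"
  # IaaS layer, each physical host:
  $ ssh heat-admin@overcloud-compute-0 journalctl --since "1 hour ago"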
You may be asking yourself, how can this be made simpler and easier to trace?


The obvious answer is to start performing log aggregation. We are already working on different fronts. Tushar Katarki presented his thoughts at RHTE APAC 2016, and work is in progress focusing on log aggregation. As Tushar puts it, it’s already improving the lives of users by reducing the operational burden and improving the efficiency of keeping the platform running.


To see how we may improve situational awareness based on our current log aggregation process, we need to fully understand it. To do so, we can focus on the data and its journey from the moment it is generated to the moment it is consumed. With that in mind, I had a talk with Javier Roman Espinar, an Architect with a background in High Performance Computing and Big Data deployments, who explained that this can be considered a Big Data issue and should be analyzed from that perspective.


We can use his article Big Data Enterprise Architecture: Overview as a starting point to analyze the issue, and we will realize that logs (or other data) must pass through several stages to become useful. These are the stages the data goes through:


Figure: the stages of the data journey, from generation to consumption.


And here is how those stages map to our current solution with Elasticsearch + Fluentd + Kibana:


Figure: the data stages mapped to Elasticsearch + Fluentd + Kibana.
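
As a tiny illustration of the consuming end (the index pattern and hostname are illustrative), logs aggregated into Elasticsearch can be searched over its REST API:

  $ curl -s 'http://localhost:9200/logging-*/_search?q=hostname:node1.example.com&size=5'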


... but what other information that is relevant to the customer can be processed this way? Peter Portante, who has been working intensively on data aggregation, had already replied to this question. He explains that we need Logs, but also Metrics (or Telemetry, as he describes it) and Configuration to perform a full correlation of all data.


So what’s next?


Next week in part two, the series continues with a deeper look at how we are performing a fuller correlation of all the available data.

Thursday, February 2, 2017

Red Hat Summit 2017 - Discover the foundations of digital transformation (session)

As previously posted, this year's Red Hat Summit will be in Boston, MA from May 2-4, 2017. You don't want to miss this, as it will not be in the Hynes Convention Center, but down by the waterfront at the new Boston Convention and Exhibition Center.

You can register today online.

My session, listed below, has been accepted, and I am also chairing the labs track at Red Hat Summit. This means I will be speaking and helping to ensure you have a good representation of Red Hat technologies to get your hands on during the week.

There is a good chance I might have something exciting around my BPM book project to tell you about at Red Hat Summit, so stay tuned.

Discover the foundations of digital transformation

The core of digital transformation is the ability to provide technology solutions to your customers in a fast-paced world while satisfying business aspirations. Many organizations are following the story line, fighting the good fight, but how can Red Hat and Open Source guide your journey? This session takes you on a journey to start laying the foundations of your digital transformation story, based on use cases and examples that you can explore when you return home. Join us for this hour of power, where you are given the inspiration to start building your digital foundations.


Looking forward to seeing you all there!