How does digital transformation actually work?

To master digital transformation in your business and put data-driven business models into practice, you need a digital mindset and comprehensive empowerment that originates with corporate management.


By Trond Bjerkvold.

What AI and machine learning are and how they relate to IoT

We can better exploit new opportunities when we understand what new technologies involve and how they interact. Today’s topic: what artificial intelligence and machine learning are, and how they relate to IoT and Big Data.

What is DevOps? – A definition

DevOps has become the go-to concept for companies looking to optimize agile processes. However, many find it difficult to understand what exactly DevOps is, what it looks like in practice, and how far-reaching its implementation can be. We explain all in this post.

What is Big Data? – A definition with five Vs

To define where Big Data begins and at which point the targeted use of data becomes a Big Data project, you need to take a look at the details and key features of Big Data. Its definition is most commonly based on the 3-V model from the analysts at Gartner and, while this model is certainly important and correct, it is now time to add two more crucial factors.

Big Data definition – the three fundamental Vs:

  • Volume describes the huge amounts of data produced every day, by companies for example. Data generation has become so large and complex that it can no longer be stored or analyzed using conventional data processing methods.
  • Variety refers to the diversity of data types and data sources. 80 percent of the data in the world today is unstructured and at first glance shows no indication of relationships. Thanks to Big Data algorithms, this data can be sorted in a structured manner and examined for relationships. The data involved is not limited to conventional datasets; it also includes images, videos and speech recordings.
  • Velocity refers to the speed with which the data is generated, analyzed and reprocessed. Today this is mostly possible within a fraction of a second, known as real time.

Big Data definition – two crucial, additional Vs:

  • Validity is the guarantee of data quality or, alternatively, Veracity is the authenticity and credibility of the data. Big Data involves working with all degrees of quality, since the sheer Volume usually comes at the expense of quality.
  • Value denotes the added value for companies. Many companies have recently established their own data platforms, filled their data pools and invested a lot of money in infrastructure. It is now a question of generating business value from their investments.

As we wrote in our previous blog post, defining Big Data is not easy, since the term relates to many aspects and disciplines. For many people the most important thing is business success (Value). The key to that is gaining new information, which must be available to many users very quickly (Velocity), from huge amounts of data (Volume) originating from highly diverse sources (Variety) and of differing quality (Validity), so that important decisions can be made fast enough to gain or maintain a competitive advantage.

In his book “Big Data: Using Smart Big Data, Analytics and Metrics to Make Better Decisions and Improve Performance”, Bernard Marr writes that if Big Data ultimately did not result in an advantage, it would be useless. We could not agree more.


Don’t let big data turn us into Big Brothers


Although some might see it differently, the society foreseen and described by George Orwell in the dystopian novel 1984 has not become reality. But what if Big Brother had access to big data technology? Big data handled without care might easily turn us all unknowingly into Big Brothers or their collaborators.

Big Brother is the symbol of the totalitarian state Oceania, where every citizen is under constant surveillance by the authorities.

Digital computers had only recently been invented when 1984 was published in 1949, and were hardly known to the general public. Computers and big data play no real role in the novel, but it is easy to imagine what Big Brother could have done with the ability to capture, curate, manage and process huge volumes of data.

Big Brother better off with computers

Without doubt, Big Brother would be far better off with big data capabilities. They had the manpower but not the necessary data tools.

Today we have those tools. We do not even need the manpower Big Brother possessed: automatic capture and processing of tremendous amounts of data can be handled by computers, machine learning and artificial intelligence, assisted by just a few people.

Read more: the Basefarm & *um big data definition.

What we call big data lakes are the key resource for big data analyses. A data lake can be fed from many different sources, and the potential results of analyzing it can be very interesting.

Huge big data potential

By looking into big data, we can reveal new insights about customers and entire societies, including new ways of distributing services and even entirely new business models. Basefarm is capable of providing these kinds of analyses.

The view of some big data evangelists is that companies with big data capabilities might be in a position to redefine their entire business. Logistics companies, for instance, produce enormous amounts of data. Evangelists suggest that this data is so valuable that exploiting it, rather than the original logistics business, could become the core of such companies in the future.

Will they also become Big Brothers?

Avoid the dark path

Unless companies are careful, that might very well be the outcome. The path to Big Brother status starts with what data you collect in the big data lake.

Security and compliance are an integral part of daily Basefarm operations. The value of this knowledge is even higher in the new world of ever-increasing capabilities to collect, curate, communicate and move data.

Without compliance work, we can all easily step over the threshold and become something far from our intentions.

The Ministry of Love wants your logs

An example of the road to becoming Big Brother is how we handle logs. Infrastructure and application logs are true big data sources, and in Basefarm we are enthusiastic about the opportunities they provide. So much information is available to improve production and the customer experience, even leading to completely new ways of serving our customers.

However, logs contain data about geographical distribution and processing, as well as personal data, all of which is regulated by the GDPR. Not least, a big topic is who can access the data. If the logs contain information about personal health, perhaps only medical doctors or psychologists should be allowed to access them. You definitely don’t want them falling into the hands of Big Brother.

The log data example is interesting. For all Basefarm knows, IT staff might already be unknowingly handling logs in a way that does not comply with regulations.
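
To make this concrete, here is a minimal sketch of the kind of scrubbing step that could run before logs ever land in a data lake. This is not Basefarm tooling: the patterns and placeholder tokens are purely illustrative, and a real pseudonymization pipeline would cover far more identifiers and be reviewed against the GDPR.

```python
import re

# Illustrative patterns only; a real catalogue would also cover names,
# session IDs, device IDs and other identifiers regulated by the GDPR.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub(line: str) -> str:
    """Mask personal data in a raw log line before it enters the lake."""
    line = EMAIL.sub("<email>", line)
    line = IPV4.sub("<ip>", line)
    return line

raw = '203.0.113.42 - jane.doe@example.com "GET /health/records HTTP/1.1" 200'
print(scrub(raw))
# -> <ip> - <email> "GET /health/records HTTP/1.1" 200
```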

Unknowing collaborator

No sane person would like to be associated with Big Brother. We do not want to contribute to others becoming Big Brother, and definitely not by them using our data.

To avoid this we need comprehensive control of our data. We need to control where it is, what it contains, who has access and how it is shared with governments, service partners and companies which provide big data services like Basefarm.

The key to avoiding collaborating with Big Brother is to handle your data correctly. It is a priceless asset for your business and you don’t want it falling into the wrong hands. Don’t risk becoming a Big Brother. Through compliance with GDPR and other regulations we can derive huge value from big data.

This might also be interesting

Download Data Thinking whitepaper

Data Thinking addresses the key subject areas of our time: data, algorithms, compute and mindset. It comprehensively supports companies through times of great complexity and guides them in their own digital development.
Learn how you can benefit from Data Thinking.

Watch recording from our GDPR Webinar

May 25 is coming soon: do you know all the responsibilities of data controllers and processors? Listen to our guide and learn who does what!

Data is stupid; using it is clever

The rise of big data opens up new possibilities. By investing in the future and exploring use cases together with customers in many industries, Basefarm creates market leaders.

Use cases for big data projects are everywhere. Take, for instance, predictive maintenance in the offshore industry (e.g. wind turbine maintenance) and the merits of the 360-degree customer view in the hospitality industry. But to flourish in this rapidly evolving world, it’s increasingly important to be agile and flexible. Many of Basefarm’s customers face the challenge of mixing and matching agile ways of working (such as DevOps environments) with traditional processes and infrastructures, resulting in a hybrid delivery model and a hybrid business.


“With this in mind, areas such as security, IoT and big data need extra focus,” says Stefan Månsby, Senior Director of Product Management & Big Data at Basefarm. “With our security division, we deliver 24/7 security services. And now we are also helping many of our customers to understand that very often, they have a golden opportunity to apply their domain expertise to their existing wealth of unexplored data.”


Data is the new oil

Businesses themselves are also becoming more data-driven. Companies are becoming more “hybrid” from a technical point of view, mixing and matching traditional and modern IT infrastructures. By making all their data available in one large pool, they embrace a new way of decision-making that relies on data science. Often, this opens up new possibilities for non-linear growth, leading companies to cross the traditional boundaries between industries. A well-known example is Tesla: in line with their mission to accelerate the world’s transition to sustainable energy, they build solar panels, batteries and electric cars.

A comparison to Tesla doesn’t do much justice to most companies. But this doesn’t mean they shouldn’t embrace big data. The most typical big data use cases show up in manufacturing, service and maintenance. The potential benefits of predictive maintenance, for example, are huge. By collecting and analyzing data from machine parts, it becomes possible to predict failure and to schedule maintenance. One Basefarm customer performs maintenance on wind farms in the Baltic Sea. With only a few ships available that can hoist ball bearings into wind turbines, they save millions of euros every year by letting AI calculate the optimal shipping routes.
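
To illustrate the principle rather than the actual model behind that wind farm case, here is a minimal predictive-maintenance sketch in Python. Everything in it is invented for the example: the sensor features, the thresholds and the synthetic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for real sensor readings per bearing; in this toy
# setup, failures skew toward high vibration and temperature.
n = 2000
vibration = rng.normal(3.0, 1.0, n)      # mm/s
temperature = rng.normal(60.0, 8.0, n)   # deg C
risk = 0.8 * vibration + 0.1 * temperature
fails_soon = (risk + rng.normal(0, 0.5, n)) > 9.0  # hypothetical label

X = np.column_stack([vibration, temperature])
X_train, X_test, y_train, y_test = train_test_split(X, fails_soon, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Flag bearings whose predicted failure probability crosses a service
# threshold, so ship time is scheduled only where it is likely needed.
flagged = (model.predict_proba(X_test)[:, 1] > 0.7).sum()
print(f"{flagged} bearings flagged for the next maintenance window")
```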

You have the data; now use it

There are numerous examples of big data use cases. Månsby: “At Basefarm, we organize workshops with our customers which generate hundreds of ideas and scenarios.”

The next step is often to design a Proof of Concept (PoC) to present to the company’s board.

“Basically, we can go from whiteboard to a working first PoC in 8 to 12 weeks,” Månsby says. “The size of the company doesn’t matter. Whether you are BMW or a small enterprise, it makes no difference. If your company has a top-heavy culture, for instance, and data science seems a bit too ‘Star Trekky’ for the CXOs, we sometimes give the CXO access to a small subset of data to play around with in a notebook. That way they get a feel for the possibilities and start to understand that this technology isn’t black magic or an experimental lab product. It’s very real, it’s here now, and it helps you achieve big goals like major improvements in efficiency, becoming more sustainable and finding new revenue streams.”
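
What such a notebook session might look like is sketched below; the file name and the columns are hypothetical, standing in for whatever small extract the customer shares.

```python
import pandas as pd

# Hypothetical extract: a few thousand anonymized service records are
# often enough for a first feel of the data.
df = pd.read_csv("service_records_sample.csv", parse_dates=["timestamp"])

print(df.describe())  # ranges, outliers, missing values at a glance
print(df.groupby("machine_type")["downtime_hours"].mean())  # quick comparison

# Renders inline in a notebook: weekly downtime over time.
df.resample("W", on="timestamp")["downtime_hours"].sum().plot(title="Downtime per week")
```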

About Stefan Månsby

Stefan Månsby is Senior Director of Product Management & Big Data at Basefarm. He has broad experience in the IT industry and has driven change in many organizations over the years. His main passion is digital innovation, and he is also a passionate photographer and music producer.

Data thinking is the holy grail of organic growth

Where does success come from? Nowadays, data thinking is a key component. It’s the culture that is responsible for SpaceX’s pioneering Falcon Heavy rocket launch as well as the secret behind hotels and bars remembering your favorite drink.

If there is anything that drives the most successful businesses right now, it is the clever use of data. Seen in this light, the acquisition by Basefarm of the Berlin-based The Unbelievable Machine Company (*um), the leading service provider for big data, cloud and managed cloud services in Germany and Austria, comes at exactly the right moment.

“Many of our customers are huge data owners. Data is the asset of the future,” explains Stefan Månsby, Senior Director of Product Management & Big Data at Basefarm. “European companies need to catch up with their North American counterparts. The big boys in Silicon Valley, such as Amazon and Google, are leading the race and there is nothing wrong with that. But some parts of Europe lag almost a decade behind when it comes to big data maturity. This needs to change.”

Great data leads to great ideas

Amongst many other industries, airlines and leisure companies will benefit greatly from having a 360-degree view of the customer. By gaining insight into customer behavior and needs, they can turn the customer’s next flight or stay into a “super-tailored experience”, because they already know the customer’s exact preferences. Even a result as simple as having your favorite drink waiting for you when you arrive at a hotel can make a big difference. But how do you get there as a company? You have to concentrate on data first, by putting all your data in one place.
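
As a toy illustration of that “one place” idea, the sketch below joins two invented data sources into a single per-guest profile. All table and column names are made up for the example.

```python
import pandas as pd

# Stand-ins for the separate systems a hotel chain might consolidate.
bookings = pd.DataFrame({"guest_id": [1, 2], "last_stay": ["2018-03-01", "2018-04-12"]})
bar_orders = pd.DataFrame({"guest_id": [1, 1, 2], "drink": ["gin tonic", "gin tonic", "espresso"]})

# Derive each guest's favorite drink from their order history.
favorite = (bar_orders.groupby("guest_id")["drink"]
            .agg(lambda s: s.mode().iat[0])
            .rename("favorite_drink")
            .reset_index())

# One row per guest: the "360-degree view" is all sources joined on one key.
profile = bookings.merge(favorite, on="guest_id")
print(profile)
```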

“The first thing we recommend is what we call ‘data thinking’,” says Månsby. “You provide the essential hard data so a company can make the necessary decisions. Part of this is data science: you test hypotheses, and either they make sense and earn you the revenue, or they turn out to be a bad idea and you learn from it. By investing in such an agile culture, you can set yourself apart from your competitors and gain a market advantage. Focus on the idea of what you would like to do, not on how you will technically solve it. The idea will make your business unique and a leader, not the technology.”
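
The hypothesis-testing loop Månsby describes can be as simple as a classic two-group comparison. The sketch below uses synthetic outcomes and invented rates purely to show the shape of such a test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical experiment: does greeting guests with their favorite drink
# lift the repeat-booking rate? Rates and sample sizes are invented.
control = rng.binomial(1, 0.20, 500)  # 20% baseline repeat-booking rate
treated = rng.binomial(1, 0.26, 500)  # assumed effect, for illustration

# A two-sample t-test on binary outcomes approximates a two-proportion z-test.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"lift: {treated.mean() - control.mean():+.1%}, p = {p_value:.3f}")
# A small p-value supports rolling the idea out; a large one means the
# hypothesis failed cheaply - which is exactly the learning described above.
```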

Elon Musk: solar panels, batteries, cars and rockets

A big difference between traditional business and business that relies on data thinking lies in the way they evolve. With the latter, this is far from linear. An example is a company that builds self-driving buses. Their core business is to make such vehicles but, once the buses are driving around in cities, the company can start a side business in traffic reports based on the data they have collected. The new revenue streams could potentially even make public transport free for passengers.

“Data thinking enables new opportunities,” Månsby says. “Look at Ikea: data thinking has made it Sweden’s second largest food exporter. Another example is Tesla, whose mission is to accelerate the world’s transition to sustainable energy. Hence, they need to develop the ultimate battery and then apply it in great cars to prove their point. That’s amazing. As a data-thinking company, you have a big advantage over linear competitors.”

Do you want to know more?

Here you can find our Data Thinking webinar recordings about AI: https://basefarm.se/en/big-data/

About the Author

Stefan Månsby is Senior Director of Product Management & Big Data at Basefarm. He has broad experience in the IT industry and has driven change in many organizations over the years. His main passion is digital innovation, and he is also a passionate photographer and music producer.

Ready to deep dive into the data lake?

Think data lakes are just a new incarnation of data warehouses? Our resident expert Ingo Steins compares the two.

Data lakes and data warehouses only have one thing in common, and that is the fact that they are both designed to store data. Apart from that, the systems have fundamentally different applications and offer different options to users.

A data lake is a storage system or repository that gathers together enormous volumes of unstructured raw data. Like a lake, the system is fed by many different sources and data flows. Data lakes allow you to store vast quantities of highly diverse data and use it for big data analysis.

A data warehouse is a central repository for company management, so it’s quite different. Its primary role is as a component of business intelligence: it stores figures for use in process optimization planning, or for determining the strategic direction of the company. It also supports business reporting, so the data it contains must all be structured and in the same format.

Challenges with data warehouses

Data warehouses aren’t actually designed for large-scale data analysis, and when used in this way these systems will reach their structural and capacity limits very quickly. We now generate enormous volumes of unstructured data which needs to be processed quickly.

Another limitation is the fact that high-quality analyses now draw on a variety of data sources in different formats, including social media, weblogs, sensors and mobile technology, which do not fit a warehouse’s rigid, uniform schema.

A data warehouse can be very expensive. Large providers such as SAP, Microsoft and Oracle offer various data warehouse models, but you generally need relatively new hardware and people with the expertise to manage the systems.

Data warehouses also suffer from performance weaknesses. Their loading processes are complex and take hours, the implementation of changes is a slow and laborious process, and there are several steps to go through before you can generate even a simple analysis or report.

Virtually limitless data lakes

Data lakes, on the other hand, are virtually limitless. They aren’t products in the same way that data warehouses are; they are more of a concept, assembled individually and expandable almost infinitely.

Data lakes can store data in virtually any format, in very high volumes and for indefinite periods of time. Because they are built using standard software, storage is comparatively cost-effective too.

Data lakes can store huge volumes of data, but need no complex formatting or maintenance. The system doesn’t impose any limits on processes or processing speeds – in fact, it actually opens up new ways to exploit the data you have, and can therefore help companies more generally in the process of digitalization.

Put on your swimsuit

All you really need to start a data lake is a suitable database, and this is relatively easy to set up with a solution like Hadoop. Companies that want to access a wide range of data and process it effectively in real time to answer highly specialized and complex questions will find that the data lake is the perfect infrastructure for the job.
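
As a minimal sketch of what “starting a data lake” can look like in practice, the snippet below uses PySpark on top of a Hadoop cluster. The paths and formats are assumptions, and a production lake would add partitioning, cataloguing and access control.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-ingest").getOrCreate()

# Ingest raw data exactly as it arrives - no upfront schema harmonization.
logs = spark.read.text("hdfs:///raw/app-logs/2018/")       # unstructured
orders = spark.read.json("hdfs:///raw/shop-orders/2018/")  # semi-structured

# Land both in the lake as Parquet so later analyses can read them efficiently.
logs.write.mode("append").parquet("hdfs:///lake/app-logs/")
orders.write.mode("append").parquet("hdfs:///lake/shop-orders/")
```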

Ingo Steins

Ingo Steins is Unbelievable Machine’s Deputy Director of Operations, heading up the applications division from our base in Berlin. He has years of experience in software, data development and managing large teams, and now runs three such teams distributed across our sites. Ingo joined The Unbelievable Machine Company in January 2016.

Ingo Steins, Deputy Director of Operations, The Unbelievable Machine Company, part of Basefarm Group since June 2017

Where does Big Data begin? – Many perspectives, one classification


To classify Big Data decisively is not so easy. Firstly, it is not just a stand-alone term but rather a combination of many aspects to reveal a whole picture. And secondly, Big Data is a buzz phrase that is used in various situations and is constantly developing. It is time to set things straight.

Buzz phrase? Collective term? Synonym?

All of the above. Fundamentally, Big Data represents large digital data volumes as well as the capturing, analyzing and evaluating of it. Therefore, Big Data is also the collective term for all digital technologies, architectures, methods and processes that are required for these tasks. Or as Hasso Plattner says: “Big Data is a synonym for large data volumes in a wide range of application areas as well as for the associated challenge of being able to process them.”

Large data volumes?

Very large. “By the year 2003, humans had created a total of 5 trillion gigabytes of data. In 2011 the same amount was created within 48 hours. Now, creating the same data volume requires just 7 minutes,” as RBB Radioeins illustrated in simple and effective terms. Driven by the internet, social networks, mobile devices and the Internet of Things, worldwide digital data volumes will grow another tenfold by 2020. In Germany alone, the current figure of 230 billion GB will rise to 1.1 trillion GB.
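
A quick back-of-the-envelope check makes these figures tangible. Taking the quoted numbers at face value (and reading “5 trillion gigabytes” as 5 × 10¹² GB):

```python
volume_gb = 5e12                       # 5 trillion gigabytes

rate_2011 = volume_gb / (48 * 3600)    # GB per second in 2011
rate_now = volume_gb / (7 * 60)        # GB per second at "7 minutes"

print(f"2011:  {rate_2011:,.0f} GB/s")           # ~28.9 million GB/s
print(f"today: {rate_now:,.0f} GB/s")            # ~11.9 billion GB/s
print(f"speed-up: {rate_now / rate_2011:.0f}x")  # ~411x in a few years
```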

This is exactly where Big Data comes into play: the huge data volumes are checked for relationships using algorithms, and the whole process requires a combination of several disciplines. “It ranges from traditional informatics and data science to interface design, and from machine learning, deep learning and artificial intelligence to mathematics, statistics and data interfaces,” explains Florian Dohmann, Senior Data Scientist at The Unbelievable Machine Company. “A lot of this is nothing new, but combining it all creates the basis for new opportunities.”

So it is only about data volumes?

Fundamentally, yes. Big Data is firstly defined by data volumes that are “too large, too complex, change too quickly or are structured too weakly to be analyzed with manual and traditional data processing methods,” according to Wikipedia. But to define where Big Data begins – i.e. from which point the targeted use of data becomes a Big Data project – you need to take a close look at the details.