OpenGov Expert Opinion
Data literacy - Understanding the story being told by the data
The volume of data is growing exponentially in today’s increasingly digitised, interconnected world. Making use of this data is critical if we want to realise the true potential of technology in transforming businesses, governments and ultimately, the lives of people. In this context, data literacy is often invoked as the key to success.
But what exactly does data literacy mean? OpenGov posed this question to Mr. James Fisher, VP, Global Product Marketing at Qlik, a leader in the visual analytics market. Mr. Fisher has nearly two decades of experience in global software and consulting businesses, focusing on analytics, performance management, finance and mobile solutions.
How would you define data literacy?
Data literacy spans a broad set of processes in the analytics workflow. It is the ability to read, work with, analyse and argue with data. It includes the ability to acquire, structure and analyse data, but critically, it’s the ability to argue with data, tell a story, present a business case and drive an outcome. Hence, I would say data literacy is a set of skills – it’s that ability to understand and derive meaningful information from data.
To illustrate this, let’s use weather as an analogy. Hypothetically speaking, if you were to wake up in Singapore with your weather app forecasting a rainy day ahead at 2 degrees Celsius, what would your intuition say?
You would probably believe that it might rain, but 2 degrees? Your understanding of data and of trends in the weather tells you that is probably not true. So, you know to question that data. Data literacy involves the skill of using one's own intuition to decipher what the data is telling you and to filter out data you should not take into consideration. This is where the skills gap starts to become apparent. And this is what we believe is the role of individuals in the future workforce, where people will work with data but also bring their creative and intuitive intelligence to the table.
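As an aside from the interview, the "intuition check" described here can be thought of as a simple plausibility filter: before trusting a data point, ask whether it falls inside a believable range for its context. The sketch below illustrates this with the weather analogy; the temperature bounds for Singapore are an illustrative assumption, not a value from the interview.

```python
# Toy illustration of a plausibility filter: reject readings that fall
# outside a believable range before drawing conclusions from them.
# The bounds (roughly 22-36 C for Singapore) are an assumed example.

def plausible_temperature(reading_c: float, low: float = 22.0, high: float = 36.0) -> bool:
    """Return True if a forecast temperature is believable for the location."""
    return low <= reading_c <= high

forecasts = [31.5, 2.0, 28.0]  # 2.0 is the suspicious reading from the analogy
trusted = [t for t in forecasts if plausible_temperature(t)]
print(trusted)  # the implausible 2.0 reading is filtered out
```

The point of the analogy is that a data-literate person performs this kind of check mentally; the code merely makes the reasoning explicit.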
Any analytics professional would agree on at least these three things - that there’s an explosion in the volume of data, in the capability of computing power to perform analytics, and even in the community of people who want to consume data analytics.
Unlike these three components, which can grow exponentially, data literacy, that is, the ability to actually use the data, is limited. It takes effort from people to learn to use and understand data, and that's why data literacy is so important from a technology perspective.
This is why, at Qlik, our approach centres on the user, and we build our products to support the human analytical process – helping as many people as possible to generate insight and value across an organization to meet their goals. We are essentially applying our technology to help close that data literacy gap and help organisations gain competitive advantage simultaneously.
In your opinion, who are the people who need data analytics skills and how do you see that evolving?
I’ve been in the analytics business for 20 years, with consulting firms and software companies. When I think about the traditional role of business intelligence (BI), it is what I would call report-centric. It is the descriptive type of analytics, merely telling you what happened. Typically, it has been owned by, and reached, a very small number of people in the organisation, probably 25% of the organisation.
While reporting does not go away, the role of analytics has evolved to be more decision-centric, with less focus on reporting. This starts the process of broadening the use case out to a richer set of users in the organisation. That’s where you start to extend the use of analytics from data scientists to business analysts and in fact, to every business user across the organisation.
That’s the role of a modern BI platform, to provide a single platform for all analytics use cases, from reporting to guided analytics to self-service analytics. Ultimately, this extends beyond the organisation, to customers and partners, when you utilise embedded and custom-built analytics.
At Qlik, we incorporate analytics into the decision-making process. Rather than regarding analytics as a destination, something an organisation has to go to, we see it as a journey in which analytics is customised to your use case. By making analytics intuitive and simple to use, it encourages data literacy and helps deal with the differing levels of maturity of users across the organisation. If you can integrate and customise the analytics to the context of each of these individuals, it becomes much more consumable.
This results in the shift of analytics consumption from data scientists to business analysts, business users, knowledge workers, operational workers and the extended ecosystem which could include customers, partners, and distributors.
Do you see this similar shift happening in the public sector?
Absolutely. Interestingly, if you think about different organisations and their propensity to share information outside of their organisations, governments have in fact led the way through open data initiatives or freedom of information initiatives. For them, accountability and proving value for the services they provide is vital.
In the public sector, many of the examples involve the population. It is about taking what exists from a complex set of analytics to comprehend what is happening within a city or organisation and making the key elements available to the electorate. Ultimately, it means making data available to the stakeholders of the government agency in a way that they can consume and gain value from.
From my standpoint, Singapore has clearly led the Smart City evolution and dissemination of information. The data.gov.sg website is a great example of this.
Hence, I would argue that the public sector has probably been more advanced than some other industries.
As the public sector opens up more data, is the general public becoming more data literate and what can be done to improve it?
There’s a kind of ‘push and pull’ effect in data literacy.
There’s the pull factor, which is the increasing interest in data from the general public, be it from a professional or personal perspective. For instance, I put a Fitbit device on when I’m training, and immediately I’m analysing how fast I go, what my fastest time is, and other relevant performance metrics I want to know. That’s the pull component where, just like other individuals, I have an interest and want to learn how data can differentiate me in my career or help me in my personal life. I’m driven to find out and learn about data.
However, if you don’t know how to question and understand the data, or to showcase, present and argue with data to unveil meaningful insights, not only does data become valueless, it can become a distraction and lead you to inaccurate conclusions. There is a massive risk that a lack of data literacy will damage the value that data can bring.
There are two dimensions to close the data literacy gap. One is the role of technology. Clearly, machine learning, artificial intelligence and similar technologies can be applied to enrich the analytics experience for a user and to guide them through the process.
At Qlik, we have made significant investments in embedding visualisation best practices in our technology, providing value-added resources that you wouldn’t necessarily get unless you are a data scientist. We’re building those things fundamentally into the technology.
The other piece is purely around education. We have invested a lot into our own Qlik Continuous Classroom capability, which is an on-demand, distance learning platform comprising more than 125 modules containing videos, exercises, and quizzes. It is not only about how to use a product, but also about best practices in terms of using data and visualising data, how to query and understand what the data is telling you.
We’re seeing increasing demand, particularly in university education, for analytics training. In fact, I believe that an individual’s data journey and analytics training can even start in secondary school. While there is no need for everyone to become a data scientist, it will be a huge differentiator and value-driver for any country or region that is able to lead in empowering students with an inherent understanding of how to work with data at a more fundamental level.
We have more data than ever. But people say we are living in a ‘post-truth’ world. Why do you think that is happening now, and how do you ensure you’re looking at the right data and looking at it in the right way?
Of the numerous challenges, a crucial one is data governance and data structure. The ability to govern data, wherever the data resides, has often been seen as just an exercise in stewardship and a form of control. While that was true in the world of traditional BI, that should not be the role that governance plays at present.
With the vast volume of data residing in multiple locations, governance needs to be the enabler that makes all of that data available to end users in a way that is trusted and consistent.
When one looks at a set of data, one should be confident that it is built and defined in exactly the same way that he or she would build and define it. However, achieving that is not about control or restriction. Individuals should have access to all the data, but the key is to have differing levels of access rights, catered to their needs and context. Hence, it is vital to enable self-service use of data to attain such a customised approach without compromising users’ trust in the data.
People won’t engage with data unless they feel confident about it. There’s not only the data literacy question about how I understand it, but also about whether I know I am looking at the right source of data. Therefore, data governance is increasingly not just about what IT cares about, such as performance, security and compliance. It’s about empowering business users and knowledge workers to use data with utmost trust in a simple, intuitive way.