You’d be hard-pressed to find anyone who would deny that the world is in the midst of an artificial intelligence (AI) boom, or that generative AI will lead us into a new era of productivity and prosperity.
Along with the opportunities AI provides, however, concerns about the technology are growing.
To date, generative AI has been trained on freely available data, creating the conditions for skewed or outright false outputs. This makes some results unreliable or misleading, which can become a threat to business and society. For AI to work correctly and be reliable for business, it needs to be fed with demonstrably valid and correct data.
Moving from massive amounts of data to higher-quality data can be thought of as moving up a value chain of five data elements: volume, velocity, variety, validity, and value.
Big data is often characterized by the first three of these elements – volume, velocity, and variety – because it represents a huge amount of data, often accumulated quickly and from a variety of sources.
Credible, higher quality data goes further, incorporating the other two, arguably most important elements of the value chain – validity and value. Credible data are data collected from a variety of verified sources that have traceable provenance. This data can be refined, aggregated, and managed, i.e. treated as a known, quality product that can be shared and traded.
There is a lot of talk in the space about generative AI, which has huge potential. However, there have been efforts in other areas of AI for years, some of which have started to bear fruit. One example is machine learning (ML), which is being used in an increasing number of fields.
One of the biggest misconceptions is that generative AI will replace all other areas of artificial intelligence; that would be a big mistake.
Last year, BARC (Business Application Research Center) announced that the “playtime” with traditional AI efforts is over, meaning that AI is mature enough to be put into production.
While generative AI is finding its place, machine learning and other AIs have already shown limitless potential for application and improving business productivity.
Not everyone wants to create apps. Most of us belong to the “other 75%” of users who may not even realize when they’ve interacted with analytics tools.
These users want instant answers; they don’t have the time, desire, or skills to perform analytics themselves.
Users also tend to trust people more than data, so collaboration and data sharing are key. This user base appreciates automatically generated visualizations and insights, complete with natural language explanations. If this happened within the business systems they work with, it would be particularly useful to them.
The majority of the world’s data – 80% according to Forrester – is unstructured, meaning it is not organized into rows and columns. Emails and documents on your intranet are examples.
Unstructured data is hard to analyze, but with new metadata and semantic techniques, we can change that.
Through the use of knowledge graphs and vector databases in RAG (Retrieval-Augmented Generation) pipelines, the possibilities for reliably combining structured and unstructured data are endless. Combined with answer management, verified and reliable questions and answers can be reused, allowing you to search the entire database and use private Large Language Models (LLMs) built internally on your own data.
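To make the retrieval step concrete, here is a minimal, self-contained sketch of a RAG-style lookup. The hash-based embedding, the sample passages, and the generate_answer stub are illustrative stand-ins for a real embedding model, vector database, and (private) LLM call:

```python
import hashlib
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Hash tokens into a fixed-size vector (a stand-in for a real embedding model)."""
    vec = [0.0] * dims
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Toy "vector database": passages from unstructured sources, stored with embeddings.
passages = [
    "Q3 revenue grew 12% year over year, driven by the analytics segment.",
    "The onboarding guide explains how new hires request BI dashboard access.",
    "Customer churn in the retail vertical fell after the loyalty programme launch.",
]
index = [(p, embed(p)) for p in passages]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k stored passages most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [p for p, _ in ranked[:k]]

def generate_answer(question: str, context: list[str]) -> str:
    # A real pipeline would call a (private) LLM here with the context prepended.
    return f"[LLM answer to {question!r} grounded in: {context[0]}]"

question = "How did revenue develop last quarter?"
print(generate_answer(question, retrieve(question)))
```

The key design point is that the LLM only sees retrieved, verified passages, which is what keeps its answers anchored to your own data rather than to whatever it absorbed during training.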
The revolution in generative artificial intelligence is moving with tremendous speed, enabling new ways of interacting with data.
You can now add a file to a chat interface and start talking to the AI in a single step. It can generate queries and code, help create content, and speed up automated processes.
Increasingly, people can begin their analytics journey with these generative AI tools, using them for simplified data visualization and business insights. This is BI in AI.
Read more: Business Intelligence software – what, for whom and why
As the collaboration between the two types of tools evolves, we will increasingly switch between these two different modes to get the most out of each platform.
If data quality and provenance were important before, in the AI world they are no longer up for debate. It’s critical for both the data that drives your business and for training AI models.
The need for recognizable and clear data provenance is particularly acute in public large language models (LLMs), where provenance is currently untraceable. Without this knowledge, it is difficult for even the best generative AI models to distinguish fact from fiction. This can lead to false facts and deepfakes.
For enterprises, trusting such data can have serious consequences. That’s why organizations need to prioritize this issue now.
We need a mechanism to clearly label and tag data using provenance and cryptography techniques, along with techniques not yet invented, to create the equivalent of a ‘DNA test for your data’.
There are already organizations working on this, such as the Coalition for Content Provenance and Authenticity (C2PA), whose members include Intel, the BBC, and Sony, and Google’s SynthID watermarking. When there is trust in the provenance and traceability of data, a self-sustaining cycle is triggered where people take responsibility for the data. This is also one of the most important conditions for turning proprietary corporate data into products.
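As a rough illustration of what such a ‘DNA test for your data’ could look like, the sketch below fingerprints a dataset with a cryptographic hash and binds it to signed provenance metadata. It is a toy example, not the C2PA specification; the signing key and record fields are invented for illustration:

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key, kept in a KMS in practice

def provenance_record(data: bytes, source: str) -> dict:
    """Create a signed record binding a dataset's fingerprint to its origin."""
    record = {
        "sha256": hashlib.sha256(data).hexdigest(),
        "source": source,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(data: bytes, record: dict) -> bool:
    """Check both that the record is authentic and that the data is unchanged."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and hashlib.sha256(data).hexdigest() == record["sha256"])

dataset = b"customer_id,region,revenue\n42,EU,1000\n"
record = provenance_record(dataset, source="crm-export-2024")
print(verify(dataset, record))         # True: provenance intact
print(verify(dataset + b"x", record))  # False: data was altered
```

Even this toy version shows the self-sustaining cycle: once every dataset carries a verifiable fingerprint and source, downstream consumers can refuse anything that fails the check.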
In a short time, we have witnessed an evolution from applications that required writing low-level code to applications driven purely by natural language.
This trend will give rise to many new apps created by “ordinary developers”, leading to a flurry of innovation. On the other hand, it is also likely to lead to governance chaos and app sprawl.
This process is very empowering and organizations would do well to take steps to educate their employees about the benefits and pitfalls of generative AI.
If the last five years were all about training teams on data literacy, now companies would do well to move towards AI literacy.
According to IDC, enterprises prefer to cover the entire data pipeline with as few vendors as possible – ideally just one – chosen from the best in the field. The new platforms will increasingly unify data engineering with powerful artificial intelligence, automation, and data science.
This will enable business analysts to perform data management and preparation tasks and to apply advanced statistical models within the data and tools they work with every day, without needing to export anything into a specialized environment.
Streamlining processes and merging the roles and capabilities of data engineering, data science, and analytics will enable organizations to tackle more strategic use cases.
We’ll move from asking “How much profit did we make this quarter?” to asking “Which customers should we target going forward?” and “Which key employees are at risk of leaving, and what factors are driving that decision?”
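As a toy illustration of that last question, here is the kind of simple attrition-risk model a business analyst could run directly inside such a platform. The features, data, and employee values below are entirely invented for the example:

```python
# Minimal sketch: scoring employee attrition risk with logistic regression.
# The dataset and feature names are illustrative, not real HR data.
from sklearn.linear_model import LogisticRegression

# Features per employee: [tenure_years, overtime_hours_per_month, pay_vs_market]
X = [
    [1, 30, -0.10], [8, 5, 0.05], [2, 25, -0.15],
    [10, 2, 0.10], [3, 20, -0.05], [7, 8, 0.00],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = left the company, 0 = stayed

model = LogisticRegression().fit(X, y)

# Score a current employee: probability they are at risk of leaving.
risk = model.predict_proba([[2, 28, -0.12]])[0][1]
print(f"attrition risk: {risk:.0%}")
```

The point is not the model itself but where it runs: inside the analyst’s everyday tools, against governed data, with no export step.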
So far, large language models (LLMs) and generative AI have been used mainly to aid reasoning and conduct analysis, not to take action. But efforts are now under way to support the latter, including approaches to LLMs that combine the synergy of reasoning and action (a pattern known as ReAct).
Of course, this requires transformed data in near real-time and in the right place. In organizations, we will start to see new ways to use generative AI with application automation.
Generative AI coupled with automation will mean less manual work for humans connecting and building workflows, letting them focus instead on making and managing decisions.
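A minimal sketch of that reason-and-act loop is shown below. The call_llm stub and the two tools are placeholders; a real agent would prompt an actual LLM and call real systems such as a SQL warehouse or a workflow engine:

```python
from typing import Callable

# Hypothetical tools the agent may invoke; invented for this example.
TOOLS: dict[str, Callable[[str], str]] = {
    "sql_query": lambda q: "quarterly_profit=1.2M",
    "notify_team": lambda msg: f"sent: {msg}",
}

def call_llm(history: list[str]) -> str:
    """Stub standing in for an LLM call. A real agent would send `history`
    as the prompt and parse the model's next step from its completion."""
    if not any(line.startswith("Observation") for line in history):
        return "Action: sql_query | what was this quarter's profit?"
    return "Final Answer: profit this quarter was 1.2M"

def run_agent(task: str, max_steps: int = 5) -> str:
    """Alternate between model steps and tool calls until an answer emerges."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = call_llm(history)
        history.append(step)
        if step.startswith("Final Answer:"):
            return step
        tool, _, arg = step.removeprefix("Action: ").partition(" | ")
        history.append(f"Observation: {TOOLS[tool](arg.strip())}")
    return "stopped: step limit reached"

print(run_agent("Report this quarter's profit to the team"))
```

Each loop iteration is one reason-then-act cycle: the model proposes an action, the system executes it, and the observation feeds the next round of reasoning, which is why this pattern needs transformed data available in near real time.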
Read more: What is the price of Business Intelligence software [Part 1]
Currently, early applications of generative AI are massively scalable and are typically considered in the context of business-to-consumer (B2C) relationships.
Over time, we will increasingly see AI adapted to industry and more specific business-to-business (B2B) use cases. This will take the form of private Large Language Models (LLMs) and applications where the foundation may be generic but with layers of customization on top. An example is the artificial intelligence cluster that Mark Zuckerberg is building for medical research.
Your organizational data will be valuable ‘raw material’ here and ‘solution factories’ will emerge where domain-specific data and applications can be shared and traded. However, the question of which artificial intelligence will be the basis for building this remains unanswered.
Architectural approaches to harmonizing disparate data have become a reality over the past year, thanks to artificial intelligence and technological breakthroughs. A key component of these approaches that is resonating with customers is “data as a product.”
It’s about applying the principles of product management to data: asking which use cases the data solves, what it will be used for, and by whom. It highlights the importance of data quality, governance, and usability for end users. Data as a product is evolving to become the foundation for all forms of analytics and artificial intelligence.
Read more: With new BI tool Lavena tracks millions in sales on five continents [case study]
The concept of treating data as a valuable asset or product means that it can be surfaced in a catalogue, used for a variety of internal purposes and even become a tradable commodity. The goal is to monetize the data as a product outside the organization.
More and more platforms are emerging where validated data can be refined, bought, sold and traded, and those who own it can be rewarded.
The launch of “GPTs” by OpenAI is a major milestone and a definitive turning point, as it introduces an app-store approach to contextualized AI applications, complete with a revenue-sharing model.
The next development of this approach will be enriching these models with additional data. This will encourage organizations to use their own data to further train GPT models, which can then be monetized.
In the future, such exchanges will serve as verified sources on which large language models can use sanctioned data and distribute compensation for access, similar to the way the music industry has done with streaming services. The more the data product is used, the more valuable it is.
If data quality was important before, it is significantly more so in a world with generative artificial intelligence. While we have largely solved the volume and velocity challenges, we are still working on variety. We need to move from big data to more reliable data, which also requires solving the validity and value challenges.
Your company’s data and metadata are valuable assets. Attending to their validity and value will ensure that you can use and act on them effectively to enable artificial intelligence.
Our path to the promised future of generative AI depends on the quality of the data used for this technology. If data is consistently and thoroughly checked for provenance and quality, it can be turned into a product. Then, the more your data is used for AI, the more valuable it will be.
We will see better data evolve into a tradable commodity. Data capital will become important and will be the foundation of all innovation using generative AI.
This is the last moment before AI is deployed in all aspects of work. Now is not the time to be complacent or you will be left behind. Generative AI will change the world to the same extent that the internet and, in part, business software did.
There are challenges, but by taking the right actions, overcoming the obstacles will usher in an era of unprecedented innovation and prosperity.
If this seems overwhelming, remember that you don’t have to go it alone. And you shouldn’t. Work with competent partners who can turn your big data into better, more reliable data so you too can realize the value of generative AI.
At Balkan Services we have expert knowledge of business, technology and legislation, and we speak all three languages. We will listen carefully and advise you on choosing the right business system for your needs.
Balkan Services has been implementing software solutions for businesses since 2006 and has completed more than 690 projects, of which more than 420 are in the field of BI systems. We follow a proven implementation methodology with clear steps and know-how on best practices.
Source: “Bridging the Trust Gap in Generative AI”, Qlik