Digilah

Categories
AI Tech

AI needs more data – and it can't get it from the supermarket, or the fridge

Written by Ritesh Kant on Digilah (Tech Thought Leadership).

Large language models (LLMs) require enormous amounts of data for training and retraining. Estimates suggest that Llama 3 was trained on a set of 11 trillion words, while ChatGPT 4.0 needed a comparatively paltry 5 trillion words!

And that's not all: next-generation models require datasets that are 10X larger, and so on.

While the possibilities with AI are infinite, the datasets needed to explore, and capitalize on, these infinite possibilities are decidedly finite.

Why is data so important to AI?

Data is the oil for AI models. The reasons are well documented and can be summarized as follows:

  • Pattern Recognition: Machine learning and deep learning models rely on data to recognize and learn patterns, and then make predictions or decisions.
  • Training: Models use data to map inputs to outputs accurately, which is critical for tasks like classification, regression, and clustering.
  • Feature Learning: Data provides the features (variables) that models learn from, helping them identify which features are significant and how they relate to outcomes.
  • Performance Improvement: A large and diverse dataset helps models learn a wide range of scenarios and variations, improving their ability to generalize.
  • Evaluation and Validation: Validation and test datasets are used to evaluate a model's performance and ensure that it is not overfitting (see the sketch after this list).
  • Bias Reduction: Adequate and representative data helps reduce biases in AI models.
  • Adaptation and Updating: Continuous data collection allows AI models to be updated and adapted, keeping them relevant and accurate.
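To make the evaluation point concrete, here is a minimal sketch (assuming scikit-learn and a synthetic stand-in dataset) of the train/validation split every model-building effort relies on; a large gap between the two accuracy numbers is the classic symptom of overfitting.

```python
# Minimal sketch: hold-out evaluation to check generalization (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

# Hold out data the model never sees during training.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# A large gap between these two numbers is a classic sign of overfitting.
print("train accuracy:     ", accuracy_score(y_train, model.predict(X_train)))
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```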

What are the current data sources?

If data is the oil for AI models, the current and known oil wells include the following:

  • The Common Crawl foundation's open data: Consolidated from large-scale web crawls, it contains a dataset of 25 trillion words, 55% of which is non-English. Note that these datasets are not de-duplicated (a sketch of de-duplication follows this list).
  • Web data not captured by Common Crawl: Search engines such as Google and Bing will have crawled far more data than Common Crawl. Much of it is long tail (restaurant menus, for example) and not relevant for AI training. Estimates put it at 2 to 5 times the Common Crawl dataset.
  • Academic publications and patent publications: Could probably add up to an additional 1 trillion words. Note, however, that much of it is in PDF form and requires OCR to extract text; some of it is also behind paywalls.
  • Book archives such as Anna's Archive: Approximately 3 trillion words, most of which is in PDF form and behind paywalls or logins.
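As a minimal illustration of what de-duplication involves, here is a sketch that removes exact duplicates by content hash. Pipelines at Common Crawl scale typically rely on fuzzy techniques such as MinHash/LSH, so treat this only as the simplest version of the idea.

```python
# Minimal sketch of exact de-duplication of text documents by content hash.
import hashlib

def dedupe(documents):
    seen = set()
    unique = []
    for doc in documents:
        # Normalize whitespace and case before hashing so trivial variants collapse.
        key = hashlib.sha256(" ".join(doc.lower().split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["The menu of the day.", "the menu  of the day.", "An entirely different page."]
print(dedupe(docs))  # keeps two of the three documents
```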

Can we do more to get more data?

Can we dig deeper to get more oil? Feasibly we can; however, the law of diminishing returns catches up, and much of what we would gain, for example through more sophisticated web crawls, would be long-tail data that is not relevant for training AI models.

Another solution is synthetic data: artificially generated data that mimics real-world data, created using algorithms, simulations, or generative models. Its challenges are quality, validation and de-duplication.
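A minimal sketch of the synthetic-data idea, and of the quality, validation and de-duplication concerns that come with it, assuming NumPy and a toy tabular dataset (real systems use far richer generators such as simulations, GANs or LLMs):

```python
# Minimal sketch: fit simple distributions to "real" data, sample synthetic rows,
# then run basic validation and de-duplication checks.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" data: ages and incomes of 1,000 customers.
real = rng.normal(loc=[35, 50_000], scale=[10, 15_000], size=(1_000, 2))

# Generate synthetic rows from the fitted mean and covariance.
synthetic = rng.multivariate_normal(real.mean(axis=0), np.cov(real, rowvar=False), size=1_000)

# Validation: do the synthetic marginals resemble the real ones?
print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))

# De-duplication check: no synthetic row should be a near-copy of a real row.
nearest = np.min(np.linalg.norm(real[None, :, :] - synthetic[:, None, :], axis=2), axis=1)
print("closest real/synthetic distance:", nearest.min().round(3))
```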

There is hence a crying need for more oil, that is, more data, and the immense possibilities of the AI industry depend on it.

Can data be created afresh – and how?

Can oil be created? In this case it very well can be. The treasure trove of oil, nay data, that AI companies are mining has been created by approximately 1% of the global internet populace. Because global internet penetration cascaded from the more developed Western world to less developed regions over time, current datasets also suffer from bias and a lack of representation and diversity.

The opportunity to create new data is immense. The global internet user base is approximately 5.4 billion. As an indication of the scale of knowledge this user base holds, a typical human being has spoken roughly 150 million words by the age of 20.

Estimates suggest that the total number of words spoken daily, across languages and regions, is around 115 trillion. Discounting 60% for long-tail irrelevance and duplication, we are still left with a useful superset of knowledge of 45-50 trillion words every day.
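A quick back-of-envelope check of these figures, using the article's own estimates (the per-person figure is simply the daily average they imply):

```python
# Back-of-envelope check of the figures above (all inputs are the article's estimates).
internet_users = 5.4e9             # global internet user base
words_per_person_per_day = 21_000  # implied daily average across those users
total_daily_words = internet_users * words_per_person_per_day
usable_fraction = 1 - 0.60         # discounting 60% for long-tail irrelevance and duplication

print(f"total words spoken daily: {total_daily_words / 1e12:.0f} trillion")
print(f"usable words daily:       {total_daily_words * usable_fraction / 1e12:.0f} trillion")
```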

This is the oil that needs to be created and then mined. The solution is to have a far more significant portion of the worldwide internet populace create this oil, nay data.

Incentivizing internet users to create data that AI models can use needs to be a gradual process that can leverage several levers, some of which are as follows:

  1. Financial incentives in the form of monetary rewards, profit-sharing models that give data/content creators a share of AI models' profits, and data marketplaces where creators can sell their data or content.
  2. Gamification in the form of points systems, leaderboards and badges, and challenges and competitions.
  3. Exchange of value in terms of access to subscriptions, tickets, events, and so on.
  4. Recognition in the form of community building, recognising contributors and contributions, highlighting social impact, and collaborative projects whereby contributors can see for themselves the results of their contributions.
  5. Partnerships and collaborations with academia, academic institutions, AI researchers and corporates (both for-profit and non-profit) that are building AI models.
  6. Ensuring privacy of data, and transparency and provenance around how the data/content contributions are being used.

This is a long road, but a mix and match of these approaches can create a compelling playing field for internet users to willingly and actively contribute their data. 

If the data/content so created covers diverse scenarios and populations, the downstream models are less likely to suffer from bias, more likely to be representative and diverse, more performant in their decisions, and more likely to perform fairly across different groups.

The data/content creation road has been traveled before, most notably by social media platforms. Platforms that take up data/content creation for the significant cause of the AI revolution should inculcate best principles from the social media evolution, from encyclopedias such as Wikipedia and Fandom, and from ask-me-anything platforms such as Quora, along with Web3 principles of incentivization and decentralization. We owe this much to all the possibilities inherent in AI.

References

  1. https://www.educatingsilicon.com/2024/05/09/how-much-llm-training-data-is-there-in-the-limit/#shadow-libraries
  2. https://x.com/mark_cummins

Most asked questions

How many words are required to train present day LLMs?

Estimates suggest that Llama 3 was trained on a training set of 11 Trillion words, ChatGPT 4.0 needed a paltry training set of 5 Trillion words.

What is the average number of words we speak?

A typical human being at the age of 20 has spoken 150 million words.
Estimates suggest that the total number of words spoken daily, across languages and regions, is 115 trillion.

How many people use the internet?

The global internet user base is approximately 5.4 billion.

Most searched queries

Large Language Model (LLM)

ChatGPT 4.0

Optical Character Recognition (OCR)


Categories
Decision Making Res Digi Res

From Theory to Reality: Data Structures and Algorithms enhancing life

Written by Sneha Rani on Digilah (Student Tech Research)

My name is Sneha Rani. I am currently pursuing a B.Tech. in Electronics and Communication at the Indian Institute of Technology (BHU), Varanasi, India. I have a keen interest in how large datasets are analyzed and transformed into meaningful results. The key to organising and retrieving data lies in data structures and algorithms. We now live in a world where our energy should go into thinking up better solutions.

Data structures are everywhere!

In the current world where technology is embedded in our daily lives, the importance of data structures and algorithms cannot be doubted. Behind every app, website, and digital service is a large network of data structures and algorithms that are working day and night to make our lives more comfortable, efficient, and fun.

Indeed, basic ideas from computer science, from optimizing search results to powering recommendation systems, are changing the way we interact with technology.

Introduction to DSA

The basic elements of computer science have changed the way we communicate with technology, letting us go beyond what physical labour alone could achieve and concentrate on more creative problem solving.


There are numerous data structures and algorithms: the greedy approach, dynamic programming, graphs, trees, linked lists, arrays and lists, and sorting and searching, to name a few. Whether you want to optimize a solution or cut its cost, data structures and algorithms let you do it efficiently.


Dijkstra's and Bellman-Ford's algorithms find the shortest paths from a source node to the other nodes, while the Floyd-Warshall algorithm computes the shortest path between every pair of vertices in a graph. Dynamic programming saves previously computed results and reuses them to build the most efficient solution.
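For readers who want to see the idea in code, here is a minimal sketch of Dijkstra's algorithm over an adjacency-list graph with non-negative edge weights:

```python
# Minimal sketch of Dijkstra's algorithm using a binary heap as the priority queue.
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbour, weight), ...]}
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(heap, (new_d, neighbour))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 7)], "B": [("D", 3)], "D": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'C': 1, 'B': 3, 'D': 6}
```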


Trees preserve the hierarchical structure of a dataset. Arrays let you work with multiple dimensions, which enables the processing of various kinds of data and operations. Linked lists let you use storage efficiently, since data can be stored and grown dynamically.

Hashing can cut average search time from linear to near-constant, providing a good user experience. Stacks and queues are among the most useful data structures: they are as straightforward as taking books off a pile or standing in line at a ticket counter, yet they make many complex problems easy to solve.
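A minimal sketch of both ideas: an explicit stack replaces recursion for a depth-first walk, while a queue processes nodes strictly in arrival order for a breadth-first walk (the tiny tree here is only illustrative):

```python
# Minimal sketch: a stack turns a recursive traversal into an iterative one,
# while a queue processes items first come, first served.
from collections import deque

tree = {"root": ["left", "right"], "left": ["leaf1", "leaf2"], "right": [], "leaf1": [], "leaf2": []}

# Depth-first traversal with an explicit stack (no recursion needed).
stack, dfs_order = ["root"], []
while stack:
    node = stack.pop()                  # last in, first out, like taking the top book off a pile
    dfs_order.append(node)
    stack.extend(reversed(tree[node]))  # push children so the left child is visited first

# Breadth-first traversal with a queue, like a ticket line.
queue, bfs_order = deque(["root"]), []
while queue:
    node = queue.popleft()
    bfs_order.append(node)
    queue.extend(tree[node])

print(dfs_order)  # ['root', 'left', 'leaf1', 'leaf2', 'right']
print(bfs_order)  # ['root', 'left', 'right', 'leaf1', 'leaf2']
```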

With every dive into the world of data structures we discover its endless possibilities, we are intrigued by its complexities and intricacies, and we are drawn into its depth.

Let us consider some of the applications of data structures and algorithms:

Efficient Information Retrieval

Think of the situation of looking for information on the web without the support of good data structures and algorithms! It would be similar to looking for a needle in a haystack.

Through the use of data structures like hash tables and binary search trees, and algorithms like breadth-first search and depth-first search, search engines can quickly sift through huge amounts of data to find relevant results in milliseconds.
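The core trick behind fast keyword lookup is the inverted index, essentially a hash table mapping each word to the set of documents that contain it. A minimal sketch with toy documents:

```python
# Minimal sketch of an inverted index for keyword search.
from collections import defaultdict

docs = {
    1: "best pizza restaurant near me",
    2: "running shoes on sale",
    3: "pizza dough recipe for beginners",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    # Return documents containing every query word (intersection of posting sets).
    sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(search("pizza"))             # {1, 3}
print(search("pizza restaurant"))  # {1}
```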


Whether you are looking for a nearby restaurant, researching a topic for a school project, or shopping for a new pair of shoes, data structures and algorithms ensure that the information you need is at your fingertips within a few keystrokes.

Personalized Recommendations

Have you ever noticed that the ads on your favourite social media platform always seem to be perfectly tailored to your interests, and that your Instagram and TikTok feeds feel so familiar? Or how streaming services fight for your viewing time by recommending movies and TV shows that match your viewing habits?

Data structures are the basis of the suggestions given to you by your online shopping app.

The possibility of such a high degree of personalization is due to the use of advanced recommendation algorithms that take into account your past behaviour, preferences, and demographic information to suggest the content that you are likely to enjoy. 

Through the use of data structures such as graphs and algorithms like collaborative filtering and content-based filtering, technology firms can generate personalized experiences that make users stay longer and come back for more.
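Here is a minimal sketch of user-based collaborative filtering on a toy rating matrix: it recommends the unseen item whose similarity-weighted score, computed from users with similar tastes, is highest (cosine similarity via NumPy; real recommenders are far more elaborate):

```python
# Minimal sketch of user-based collaborative filtering.
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

target = 0  # recommend for the first user
similarities = np.array([cosine(ratings[target], ratings[u]) for u in range(len(ratings))])
similarities[target] = 0  # ignore self-similarity

# Predicted score for each item = similarity-weighted average of other users' ratings.
predicted = similarities @ ratings / (similarities.sum() + 1e-9)
unseen = ratings[target] == 0
best = int(np.argmax(np.where(unseen, predicted, -np.inf)))
print("recommend item", best)  # item 2, liked by the most similar user
```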

Optimized Transportation and Navigation

Navigation apps have become vital for travelers who are going to their work, planning a trip, or exploring a new city. Behind the scenes, these apps use data structures like graphs and algorithms like Dijkstra’s shortest path algorithm to calculate the most effective routes, considering factors such as traffic jams, road closures, and real-time updates.


Through the optimization of the transportation routes, data structures and algorithms not only save time and fuel but also cut down on stress and make the whole travel experience a lot more pleasant.

Enhanced Communication and Collaboration

Nowadays, the world is more interconnected than ever, and communication and collaboration are key to both personal and professional success.

Instant messaging apps, email clients, and collaboration platforms use data structures, such as queues, stacks, and trees, and algorithms, such as sorting and searching, for the fast and efficient delivery of messages.


No matter if you are texting a friend, sending files to coworkers, or attending a virtual meeting, data structures and algorithms make communication and collaboration possible even if the people are in different time zones or across distances.

Let us now delve into some real-life cases where data structures and algorithms are being used:

Artificial Intelligence and Machine Learning

Artificial intelligence and machine learning are among the areas where data structures and algorithms are used most heavily.


AI and machine learning algorithms usually use special data structures that are designed for data representation and processing, thus making it easier to carry out tasks more quickly and accurately. 

For instance, decision trees are employed in classification tasks, while neural networks make use of complex graph-like structures to depict the relations between data points.

Databases

Databases are now everywhere; they are the engine that drives everything from social media to financial systems.

Behind the scenes, databases depend on complex data structures like B-trees, hash tables, and indexes to store, retrieve, and manage huge amounts of structured data efficiently.

File Systems

File systems are responsible for organizing and managing the files stored on computers and storage devices.

Data structures such as linked lists, trees (like B-trees or binary trees), and hash tables are used to keep track of file metadata, directory structures, and file locations on disk.

Financial Systems

Financial systems handle huge amounts of transactional information and perform complex calculations. 

Data structures such as priority queues, hash tables, and trees are used to model financial instruments, visualize market trends, and improve trading strategies.
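As one concrete illustration, a limit order book can be kept in two priority queues (heaps), so the best bid and best ask are always available in constant time. A minimal sketch with made-up orders:

```python
# Minimal sketch of a limit order book built on priority queues (heaps):
# the best bid is the highest buy price, the best ask is the lowest sell price.
import heapq

bids, asks = [], []  # max-heap via negated prices, and a normal min-heap

def add_order(side, price, quantity):
    if side == "buy":
        heapq.heappush(bids, (-price, quantity))
    else:
        heapq.heappush(asks, (price, quantity))

for side, price, qty in [("buy", 101.0, 5), ("buy", 102.5, 3), ("sell", 103.0, 4), ("sell", 102.8, 2)]:
    add_order(side, price, qty)

best_bid, best_ask = -bids[0][0], asks[0][0]
print("best bid:", best_bid, "best ask:", best_ask, "spread:", round(best_ask - best_bid, 2))
```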

Computer Graphics and Gaming

Data structures are a key factor in computer graphics and gaming, where they are used to model and manipulate objects, scenes, and game states.

For instance, spatial data structures such as octrees are used for collision detection and spatial partitioning.

Healthcare Systems

Healthcare systems keep patient records, medical images, and treatment plans using data structures like linked lists, trees, and hash tables.

Data structures are thus the tools that make it possible to organize patient data, track medical histories, and support communication between healthcare providers.

Social networks

Social networking platforms deal with heaps of user data and connections between users. Graph data structures are used to model social networks, in which nodes are users and edges are relationships.

Graph algorithms are then employed to recommend friends, detect communities, and analyse network behaviour.
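A minimal sketch of friend recommendation on an adjacency-set graph: suggest the non-friend with the most mutual friends (the names and the tiny graph are, of course, made up):

```python
# Minimal sketch: recommend friends by counting mutual connections.
from collections import Counter

graph = {
    "asha": {"bela", "chen"},
    "bela": {"asha", "chen", "dev"},
    "chen": {"asha", "bela", "dev"},
    "dev":  {"bela", "chen"},
}

def suggest_friends(user):
    counts = Counter()
    for friend in graph[user]:
        for candidate in graph[friend]:
            if candidate != user and candidate not in graph[user]:
                counts[candidate] += 1  # one more mutual friend
    return counts.most_common()

print(suggest_friends("asha"))  # [('dev', 2)]: two mutual friends with dev
```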

Competitive Programming

Competitive programming is somewhat like a sport for computer programmers, where participants compete to solve algorithmic and computational problems within a given time frame.

The primary goal of competitive programming is to write efficient and correct code to solve a variety of problems, usually with time limitations.

Competitive programming is about the best use of data structures and algorithms to solve real-world problems with the fewest resources for the best result.

This calls for a lot of brainstorming. In competitions, data structures are the key to tackling problems in an efficient and effective manner.


They are the tools that let participants arrange and manage data efficiently. Participants must not only understand how these data structures work but also determine when and where to use them to solve various problems.

Conclusion

Data structures and algorithms simplify information retrieval and bring personalized experiences into the real world, making our daily lives easier.

Using the basic principles of computer science, developers and engineers can come up with ingenious solutions that make our lives easier, more convenient, and more enjoyable.

As technology keeps evolving, the relevance of data structures and algorithms will only increase, driving further advancements and changing the way we interact with the world around us.

The central point is that data structures and algorithms are the basic elements of our digital society; they are the tools we use to navigate the complexities of the modern world with confidence and ease.

By adopting data structures and algorithms, we can open new doors, trigger innovation and thus, build the future that the next generations will live in.

References

Data Structures Using C And C++ by Y. Langsam, M. Augenstein And A. M. Tenenbaum

https://www.geeksforgeeks.org/learn-data-structures-and-algorithms-dsa-tutorial

https://www.geeksforgeeks.org/real-time-application-of-data-structures

https://iq.opengenus.org/applications-of-different-data-structures/#google_vignette


Most asked questions

Which data structures are used for non-recursive implementation of programs?

Stacks and queues are used to solve many complex problems easily. The stack, in particular, is the key to implementing non-recursive (iterative) versions of recursive programs.

Which data structures are helpful in visualizing market trends?

Data structures such as priority queues, hash tables, and trees are used to model financial instruments, visualize market trends, and improve trading strategies.

Most searched queries

Collaborative filtering

Decision trees

Machine learning


Categories
Mar Tech AI Tech

Transforming Marketing using Generative AI

Written by Shivani Koul on Digilah (Tech Thought Leadership).

As marketers, we fundamentally learn about the 4 Ps of Marketing: Product, Price, Place, and Promotion. Later, three more Ps were added: People, Process, and Physical evidence. Whatever we do revolves around the customer.

AI will disrupt all the Ps in many ways going forward, and the argument often comes up: will it take jobs? My assumption is that it will take away non-productive jobs and give marketers space to work on something newer and smarter.

This is the role of technology: to assist humans in making the best possible product, while never forgetting the “human touch”. Marketing is all about creativity, empathy, and understanding human behavior. Successful marketing needs originality and a creative spark only humans can possess.

Having said that, a 2020 Deloitte global survey of early AI adopters showed that three of the top five AI objectives were marketing-oriented: enhancing existing products and services, creating new products and services, and enhancing relationships with customers.

The main job of any marketer is to understand the customer's needs, match them with offerings such as products and services, and persuade the customer to buy.

This looks simple, a three-step process, but there are many critical steps and pieces of information in between, where marketers must analyze a lot of data and tweak the marketing strategy accordingly.

Let's discuss this with some real-world examples and see how marketers are using AI to support them.

1. Content Marketing –

Currently, AI can help customize marketing content such as product information, emails, blogs, marketing messages, and copy. All of this can be done using openly available AI tools that work from simple prompts.

With machine learning, this data can be used further to create sales pitches, cross-selling pitches, customer engagement, and last-minute deals or offers, taking into account variables such as the target consumer's demographics and behavior, along with deep analysis of the impact of each communication. Example: AI tools.

2. Data Analytics –

Slicing and dicing of data was done earlier as well, but AI has taken it to the next level, where one can get predictive analysis and prompts/suggestions to enhance the content or the marketing campaign.

Customer data such as preferences, engagement, and status (i.e., which rung of the purchase funnel the customer is on), combined with CRM data, gives marketers room to redefine the marketing strategy. Example: CRM tools.
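As a hedged illustration of the kind of analysis involved, here is a minimal sketch (assuming scikit-learn and toy engagement data) that clusters customers into segments a marketer could then map onto funnel stages:

```python
# Minimal sketch: customer segmentation with k-means clustering (assumes scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

# Columns: visits per month, emails opened, purchases made (toy data).
customers = np.array([
    [1, 0, 0], [2, 1, 0], [15, 8, 1], [20, 10, 2], [40, 25, 6], [38, 30, 7],
], dtype=float)

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print(segments)  # e.g. three groups: casual browsers, engaged prospects, loyal buyers
```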

3. Search Engine Optimization –

This has been a game-changer, with real-world examples such as Google, Netflix, and other platforms. It helps segment customers and target advertising accordingly. It has also helped marketers leverage cross-functional platforms.

So, if you search for some product on Amazon and open any social media like Instagram or Facebook, you will see recommendations of the same or similar products on that platform as well. By doing this, marketers can enhance the recall value of products or services.

4. Placement of advertisement –

This has always been important: where to place an advertisement for the best ROI. AI has made it easy to target and place advertisements based on consumer data such as purchase history, preferences, and the context of purchase.

Example Google/YouTube advertisement.

5. E-commerce and Digital Marketing –

AI has been widely used by e-commerce websites and in digital marketing to reach the right customers, understand their needs and buying patterns, and automate marketing workflows and the course correction of marketing efforts, which would otherwise have taken a lot of resources. Example: any AI-enabled fitness app.

AI can be a game-changer in many ways, but human decision-making is typically reserved for the most consequential questions, such as whether to continue a campaign or to approve expensive TV ads.

The challenges remain: training your algorithm enough that it gives the expected results, prompting it correctly, and above all, ensuring the safety and security of data.

Data privacy is going to be of the utmost importance for AI. Clear and transparent policies need to be drafted around data security and privacy.

I believe AI is like a child that needs to be trained to become a responsible assistant; hence humans have the greater role of deciding how to raise this child to serve mankind.

As marketers continue to embrace AI technologies, they must strike a balance between innovation and ethical considerations, leveraging AI’s capabilities to enhance customer experiences while upholding trust and transparency.

References:

https://hbr.org/2021/07/how-to-design-an-ai-marketing-strategy

Most asked questions

What are the 4 P’s of marketing?

4 P’s of Marketing: Product, Price, Place, and Promotion. Later, three more P’s were added: People, Process, and Physical evidence. 

What is the key to successful marketing?

Successful marketing needs originality and a creative spark.

How is machine learning serving the marketing industry?

With machine learning, data is used for creating sales pitches, cross-selling pitches, customer engagements, last-minute deals, or offers along with deep analysis of the impact of communication.

Most searched queries

Machine learning

CRM (Customer Relationship Management)

SEO (Search Engine Optimization)

ROI (Return on Investment)


Categories
Ad Tech

First Party Data is the King

Written by Stafaniya Radzivonik on Digilah (Tech Thought Leadership).

Nowadays we keep hearing about the upcoming cookieless and ID-less world in the ad tech industry. But what does it actually mean, and how is it going to change our online activities? Let's get to the root of it.

User Identification

Any user can be identified, recognized and tracked within an online environment. Third-party cookies (3P cookies), device IDs and more sophisticated IDs such as IDFA, AAID and IDFV play the key role in that mission. They help advertisers target the right audiences and deliver relevant ads according to users' preferences and interests. To what extent this is allowed, however, remains an open question.

The implementation of CCPA, LGPD, GDPR and TCF v2.0, as well as the attempt to unite them under the GPP, alongside LAT introduced by Apple and Google in 2012, brought to the table the notion of user consent, where users can opt in or opt out and adjust the data they share at any time.

Cookieless and ID-less world

Following these data privacy restrictions, Apple released the App Tracking Transparency (ATT) framework in 2021, which prohibits fingerprinting and gives full control to users. In turn, Google announced the deprecation of 3P cookies in Chrome by the end of 2023. Given that, the future of cookies and IDs is a foregone conclusion.

The upcoming cookieless and ID-less world means marketers need to explore ways to run successful campaigns without user IDs, and it challenges publishers' ad monetization strategies. It becomes necessary to adopt a portfolio of alternative approaches and solutions to target and serve relevant ads without clear identifiers.

Options available to advertisers and publishers

Privacy Sandbox Proposal

The Topics API is rooted in Federated Learning of Cohorts (FLoC) and is designed to support interest-based advertising. It gives consumers more privacy on the web by analyzing users' online activity (behavior) within the browser itself, without any cookies. It works from a taxonomy of roughly 350 topics, recalculated weekly from recent browsing, with sensitive categories excluded.

First-Party Data

Leveraging and storing the first-party data (1P data) collected by the publisher across all applicable devices (websites, apps, smart TVs) within a customer data platform (CDP) makes it possible to consolidate all the touchpoints with the audience and to build a coherent profile of each user for more targeted campaigns.

Universal IDs

A universal ID, a single identifier assigned to each user, allows anonymized information about that user to be passed to approved partners. There are 1P data-based IDs (LiveRamp, ID5, etc.), proprietary IDs (TTD, Stroer, Criteo, etc.), and industry IDs. Though widely tested, a universal ID requires users' email addresses, the collection of which can be challenging.
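The underlying idea of most email-based universal IDs is a stable, anonymized identifier derived from a consented email address. Providers such as LiveRamp and ID5 use their own, more elaborate schemes, so the salted hash below is only an illustration of the concept:

```python
# Minimal sketch: derive a stable, anonymized identifier from a consented email address.
import hashlib

SALT = "publisher-specific-secret"  # hypothetical salt, kept server-side

def universal_id(email: str) -> str:
    normalized = email.strip().lower()
    return hashlib.sha256((SALT + normalized).encode("utf-8")).hexdigest()

# The same person yields the same ID regardless of capitalization or stray spaces.
print(universal_id("Jane.Doe@example.com") == universal_id("jane.doe@example.com "))  # True
```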

Contextual Advertising

Contextual advertising is based on keywords retrieved from the page content, where topic-based targeting gives advertisers control over ad placement and ensures brand safety. This is an effective way to show relevant ads without collecting 3P data.
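A minimal sketch of the contextual idea: score ad categories against the keywords found on the page, with no user identifiers involved (the page text and category keyword lists are made up):

```python
# Minimal sketch of contextual targeting by keyword overlap.
page_text = "Ten easy pasta recipes for weeknight dinners with fresh tomatoes and basil"

ad_categories = {
    "cooking": {"pasta", "recipes", "dinners", "tomatoes", "basil"},
    "running": {"marathon", "shoes", "training", "pace"},
    "finance": {"loan", "mortgage", "savings", "interest"},
}

page_words = set(page_text.lower().split())
scores = {category: len(keywords & page_words) for category, keywords in ad_categories.items()}
print(max(scores, key=scores.get), scores)  # 'cooking' wins, so serve a cooking-related ad
```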

Smaato’s holistic approach

At Smaato (now part of Verve Group), we've been building data cohorts and audiences on the basis of 1P data from in-house gaming studios, 2P data shared by our web, in-app and CTV publishers, and 3P data received from data providers, shaping it by means of advanced targeting options (geo segments and geo-fencing, plus behavioral, contextual and privacy targeting).

Mostly, it’s been centered around our contextual ad technology. This includes Moments.AI™, which focuses on real-time delivery to the freshest URLs and most relevant content. Our contextual toolbox also contains ATOM (anonymized targeting on mobile), a pioneering privacy-first targeting product based on AI algorithms.

Alongside data and technology, we’ve been actively measuring performance via CPM, CTR, and VCR where personal data is not needed.

Thus, even with our strict adherence to and adoption of all data regulations as well as bracing for cookies and IDs to crumble, we’ve been seeing a high potential to deliver effective omnichannel campaigns for marketers as well as to add value to the ad monetization of publishers.

Most searched questions

What are cookies?

What are the alternatives if cookies are removed?

Most searched queries

Cookies

Cookieless world

ID


Categories
Security Tech

Access control in the new normal

Written by Manish Dalal on Digilah (Tech Thought Leadership)

Security risks have become a de facto part of everyday business life, but in the race to plug in gaps created by technology itself, physical security threats should not be ignored. Two years of working/studying/shopping from home have inured many of us to the risks stemming from the conventional physical security measures. But the threat still exists and now includes health risks too.

In the aftermath of the pandemic, as organizations reopen their doors to staff and visitors, it's important to remember that a significant number of people caught the virus outside the home or from family members who went out to work, play or shop. This danger continues to lurk, and will do so even after the virus becomes endemic (hopefully soon). It means that measures that require contact (fingerprint readers, card readers and keypad readers, for instance) are vulnerable at best.

But beyond worries about contracting the virus through surface contact, there is a pressing need for a more seamless process to vet and permit entry into the workplace. Ideally such solutions should be:

  • Contactless
  • Optimized
  • Seamless
  • Able to screen visitors for identification as well as for concealed contraband items

Solutions that integrate all of the above will offer benefits through higher levels of security, manpower cost savings, time savings and analytics that can provide actionable business intelligence.

It goes without saying that the data obtained in the course of tech-driven access management should be thoroughly protected by multi-layered security. This is not just to placate the woke crowd but to instill confidence in the business itself.

Biometrics has a major role to play in enabling these solutions. At ZKTeco we recognize this, and our Safe2Greet solution is an effort to meet all the customer expectations highlighted above.

It incorporates a number of our patent-pending technologies to create a complete entrance/access control solution. Visitors pre-register their information via a digital invitation sent to their mobile phones and are checked in using various hardware options, such as a self check-in kiosk or a facial recognition reader. On submitting this information, a QR code is generated and sent to the visitor. When the QR code is scanned at the entrance kiosk, factors such as body temperature and mask compliance are verified.

Once this is done successfully, the visitor can proceed to the turnstile, where the same QR code grants access as well. Cronus, a turnstile with a built-in metal detector, also screens for concealed metal objects, an unobtrusive way to prevent violence as well as to deter pilferage at exit points. The data collected in the process is secured through high-level protections that include encryption and multi-step access verification.
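To make the flow concrete, here is a minimal sketch of the kind of signed, time-limited visitor token such a system could encode in a QR code. Safe2Greet's actual implementation is not public, so the field names, the secret, and the use of the third-party qrcode package are purely illustrative:

```python
# Minimal sketch of a signed, time-limited visitor pass encoded as a QR code.
# All names and fields are hypothetical; the 'qrcode' package (pip install qrcode) renders the image.
import hashlib, hmac, json, time
import qrcode

SECRET = b"site-access-secret"  # hypothetical key held by the access-control backend

def make_visitor_token(visitor_id: str, host: str, valid_hours: int = 4) -> str:
    payload = {"visitor": visitor_id, "host": host, "expires": int(time.time()) + valid_hours * 3600}
    body = json.dumps(payload, sort_keys=True)
    signature = hmac.new(SECRET, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return body + "." + signature  # the kiosk re-computes the HMAC to verify authenticity

token = make_visitor_token("visitor-042", "m.dalal")
qrcode.make(token).save("visitor_pass.png")  # sent to the visitor's phone with the invitation
print(token[:60], "...")
```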

Safe2Greet avoids physical contact, reduces manpower dependence, and raises the levels of health and safety. Biometric driven solutions like Safe2Greet are not the future, they are available now and they’re here to stay.

Categories
Decision Making Tech

Better than Before: Making sense of data in an age of information overload

Written by Ira Gilani Lal  on Digilah (Tech Thought Leadership)

In a 2016 Harvard Business Review article, Scott Anthony shared some insights from a study on S&P 500 companies:

  • The 61-year average tenure of a firm on the index in 1958 had narrowed to 25 years by 1980, and to 18 years by 2012
  • At the current churn rate, 75% of the S&P 500 companies will be replaced by 2027

Business leaders commonly use the military acronym VUCA (Volatility, Uncertainty, Complexity, Ambiguity) to describe the world today. The external environment is changing at a rapid pace, and companies cannot afford to be caught off guard. How can companies continue to thrive in this ever-changing external environment? While there are several challenges, there are also plenty of opportunities. Deep-rooted assumptions hold us back from unlocking this hidden potential.

Today's information and digital systems can provide a huge amount of data at the click of a button. Most organizations measure a large number of metrics for each business unit, division, department, employee level, and so on. The underlying assumption is that the more we measure, the better we are! Most senior executives are quite familiar with their local measurements (e.g. tons, units produced, order book, number of subscribers) but are ignorant of the overall financial measurements.

Everyone in the company should understand the financials; they are not just for the Accounts or Finance function. In most organizations, the top management team does not have a good understanding of free cash flow. In his book Conspiracy of Fools, Kurt Eichenwald writes that in 2001, just a month before the collapse of Enron, its chairman Kenneth Lay, CEO Jeffrey Skilling, and CFO Andrew Fastow did not know that Enron would run out of cash in a matter of weeks!

Dr. Eli Goldratt, author of the best-selling book The Goal, repeatedly emphasized that "Measurements drive behavior!" The purpose of measurements is to drive decisions about corrective actions. At the organization level, a few simple parameters are good enough. Timely data and corrective actions help individuals connect the dots and see the big picture.

Most companies review performance monthly, which leads to a significant time lag in getting key data or MIS. We recommend a weekly review mechanism focused on 3-5 key metrics. The objective of the review is only to take decisions on corrective actions. The weekly report should be simple and accurate, leaving no room for analysis paralysis and facilitating effective decision-making.

Increasing digitization of data across the organization has been a key enabler for running weekly reviews successfully. Companies that have adopted this methodology place a very high degree of focus on getting the reports right the first time, as soon as the week ends. Many have integrated their digital systems (based on ERPs such as SAP, Oracle, Tally and Zoho) and provide simple Excel-based reports and dashboards that can be accessed on devices such as mobile phones and tablets.

During the last two years of the pandemic, there has been a lot of uncertainty in supply chains. Moving to a digitally enabled model has allowed these companies to be extremely nimble and agile in their decision making. Several companies have pivoted their business models quickly in order to capitalize on emerging opportunities in the market. These decisions have been backed by analysis of market trends using simple AI- and ML-based algorithms, dynamic decision-making matrices, and partnerships across the digital ecosystem.

Technology acceleration has also helped some companies take specific actions to address business challenges posed by the pandemic. For example, to deal with disruption in logistics, companies have invested in GPS-based end-to-end tracking systems. In manufacturing businesses, the use of IoT-based sensors has picked up significantly to collect data and share timely alerts for predictive maintenance.

At Goldratt India, we have been working with Indian companies for over 23 years to help them increase their sales, profit and cash flow by an order of magnitude. Weekly reviews have been the cornerstone of all our engagements. Companies have been able to achieve quantum improvements in performance just by changing a few metrics and review processes. Some of our learnings are encapsulated below:

  1. Measure performance weekly instead of monthly
  2. Don't get stuck in analysis paralysis; focus only on corrective actions
  3. Instead of chasing benchmarks or budgets, always strive to be "Better than Before" with respect to your own past performance
  4. Monitor plan vs. actual every week: the more our planning improves, the smaller the gap between plan and actual becomes
  5. Better than Before: each week, strive to improve upon the past 13-week moving average, irrespective of the external environment (a minimal sketch of this check follows the list)
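A minimal sketch of the "Better than Before" check in point 5, comparing the current week with the trailing 13-week moving average (the weekly numbers are made up):

```python
# Minimal sketch of the "Better than Before" weekly check.
weekly_sales = [100, 98, 105, 110, 103, 107, 112, 109, 115, 111, 118, 120, 117, 125]

window = weekly_sales[-14:-1]         # the 13 weeks before the current one
baseline = sum(window) / len(window)  # 13-week moving average
current = weekly_sales[-1]

print(f"13-week average: {baseline:.1f}, this week: {current}")
print("Better than Before!" if current > baseline else "Corrective action needed.")
```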

Our client JSPL has been practicing these principles for over 5 years and is well on its way to becoming a debt-free company. It has reduced debt by over Rs 25,000 crore in the last 4 years.

A short video from the case study presented at the TOCICO international conference in the USA:

In conversation with Mr. Naveen Jindal, Chairman, JSPL