Generative AI investments: how to estimate funding for GenAI projects


In a January 2024 survey by Everest Group, 68% of CIOs cited budget concerns as a major hurdle to starting or scaling their generative AI investments. Just as with legacy software, getting the budget right is crucial for generative AI projects: misjudged estimates can lead to significant time loss and complications with resource management.

Before diving in, it’s essential to ask: Is it worth making generative AI investments now, despite the risks and the ever-changing landscape, or should we wait? 

Simple answer: decide based on risk and ease of implementation. It’s evident that generative AI is going to disrupt numerous industries. This technology isn’t just about doing things faster; it’s about opening new doors in product development, customer engagement, and internal operations. When we speak with tech leaders, they tell us how many use cases their teams have pitched. However, identifying the most promising generative AI idea to pursue can be a maze in itself.

This blog presents a practical approach to estimating the cost of generative AI projects. We’ll walk you through picking the right use cases, LLM providers, pricing models and calculations. The goal is to guide you through the GenAI journey from dream to reality. 

Choosing Large Language Models (LLMs) 

When selecting an LLM, the main concern is budget. LLMs can be quite expensive, so choosing one that fits your budget is essential. One factor to consider is the number of parameters in the LLM. Why does this matter? The parameter count gives a rough estimate of both the cost and the speed of the model. Generally, more parameters mean higher costs and slower processing times. A model’s speed and performance are influenced by many factors beyond parameter count, but for this article’s purposes, treat it as a basic estimate of what a model can do and what it will cost.

Types of LLMs 

There are three main types of LLMs: encoder-only, encoder-decoder, and decoder-only. 

  1. Encoder-only model: This model only uses an encoder, which takes in and classifies input text. It was primarily trained to predict missing or “masked” words within the text and for next sentence prediction. 
  2. Encoder-decoder model: These models first encode the input text (like encoder-only models) and then generate or decode a response based on the now encoded inputs. They can be used for text generation and comprehension tasks, making them useful for translation. 
  3. Decoder-only model: These models are used solely to generate the next word or token based on a given prompt. They are simpler to train and are best suited for text-generation tasks. Models like GPT, Mistral, and LLaMA fall into this category. Typically, if your project involves generating text, decoder-only models are your best bet (see the short sketch after this list). 
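
As a quick, hypothetical illustration of the decoder-only pattern, here is a minimal sketch using the Hugging Face transformers text-generation pipeline; the model name (gpt2, a small open decoder-only checkpoint) and the prompt are placeholders you would swap for your own.

```python
# Minimal sketch: text generation with a decoder-only model via Hugging Face transformers.
# The model and prompt below are placeholders; any decoder-only checkpoint could be substituted.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # gpt2: a small, open decoder-only model

prompt = "Summarize a software engineer's resume in one line:"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```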

Our implementation approach 

At Robosoft, we’ve developed an approach to solving client problems. We carefully choose models tailored to the use case, considering users, their needs, and how to shape interactions. Then, we create a benchmark, including cost estimates. We compare four or five models, analyze the results, and select the top one or two that stand out. Afterward, we fine-tune the chosen model to match clients’ preferences. It’s a complex process, not simple math, but we use data to understand and solve the problem. 


Where to start? 

Start with smaller, low-risk projects that help your team learn or boost productivity. Generative AI relies heavily on good data quality and diversity. So, strengthen your data infrastructure by kicking off smaller projects now, ensuring readiness for bigger AI tasks later.



In a recent Gartner survey of over 2,500 executives, 38% reported that their primary goal for investing in generative AI is to enhance customer experience and retention. Following this, 26% aimed for revenue growth, 17% focused on cost optimization, and 7% prioritized business continuity. 

Begin with these kinds of smaller projects. They will help you get your feet wet with generative AI while keeping risks low and setting you up for bigger things in the future. 

Different methods of implementing GenAI 

There are several methods for implementing GenAI, including RAG, zero-shot and one-shot prompting, and fine-tuning. These strategies can be applied independently or combined to enhance LLM performance, depending on the task, data availability, and resources. Think of them as essential tools in your toolkit: depending on the specific problem you’re tackling, you can select the most fitting method for the task at hand. 

  • Zero-shot and one-shot: These are prompt-engineering approaches. In the zero-shot approach, the model makes predictions without any prior examples or task-specific training, which suits simple, general tasks that rely on pre-trained knowledge. In the one-shot approach, the model is given a single example in the prompt before making predictions, which is ideal for tasks where one example can significantly improve performance (a brief prompt sketch follows this list). 
  • Fine-tuning: This approach further trains the model on a specific dataset to adapt it to a particular task. It is necessary for complex tasks requiring domain-specific knowledge or high accuracy. Fine-tuning incurs higher costs due to the need for additional computational power and training tokens. 
  • RAG (Retrieval-Augmented Generation): RAG links LLMs with external knowledge sources, combining the retrieval of relevant documents or data with the model’s generation capabilities. This approach is ideal for tasks requiring up-to-date information or integration with large datasets. RAG implementations typically incur higher costs due to the combined expenses of LLM usage, embedding models, vector databases, and compute power. 
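
To make the zero-shot versus one-shot distinction concrete, here is a minimal sketch using the OpenAI Python client; the model name, prompts, and example review are illustrative assumptions rather than a recommendation.

```python
# Sketch of zero-shot vs. one-shot prompting with the OpenAI Python client (openai >= 1.0).
# The model name and prompt text are illustrative; adapt them to your own use case.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: no example; the model relies purely on its pre-trained knowledge.
zero_shot_prompt = "Classify the sentiment of this review as positive or negative: 'The app keeps crashing.'"

# One-shot: a single worked example precedes the actual task.
one_shot_prompt = (
    "Review: 'Loved the fast checkout.' Sentiment: positive\n"
    "Review: 'The app keeps crashing.' Sentiment:"
)

for label, prompt in [("zero-shot", zero_shot_prompt), ("one-shot", one_shot_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", response.choices[0].message.content)
```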

Key factors affecting generative AI investments (Annexure-1)

  • Human Resources: Costs associated with salaries for AI researchers, data scientists, engineers, and project managers. 
  • Technology and Infrastructure: Expenses for hardware (GPUs, servers), software licensing, and cloud services. 
  • Data: Costs for acquiring data, as well as storing and processing large datasets. 
  • Development and Testing: Prototyping and testing expenses, including model development and validation. 
  • Deployment: Integration costs for implementing AI solutions with existing systems and ongoing maintenance. 
  • Indirect costs: Legal and compliance, plus marketing and sales. 


LLM pricing  

Once you choose the implementation method, you must decide on an LLM service (refer to Table 1 below) and then work on prompt engineering, which is part of the software engineering effort. 

Commercial GenAI products work on a pay-as-you-go basis, but it’s tricky to predict their usage. When building new products and platforms, especially in the early stages of new technologies, it’s risky to rely on just one provider. 

For example, if your app serves thousands of users every day, your cloud computing bill can skyrocket. Instead, we can achieve similar or better results using a mix of smaller, more efficient models at lower cost. We can train and fine-tune these models to perform specific tasks, which can be more cost-effective for niche applications.

Table 1: Generative AI providers and costs (2024)

In Table 1 above, “model accuracy” estimates are not included because they differ based on the scenario and cannot be meaningfully quantified. Also note that costs may vary; the figures shown are those listed on each provider’s website as of July 2024. 

Generative AI pricing based on the implementation scenario 

Let’s consider typical pricing for the GPT-4 model for the use case below. 

Here are some assumptions: 

  • We’re only dealing with English. 
  • Each token is counted as roughly 4 characters. 
  • Input: $0.03 per 1,000 tokens 
  • Output: $0.06 per 1,000 tokens 

Use case calculations – Resume builder 

When a candidate generates a resume using AI, the system collects basic information about work and qualifications, which equates to roughly 150 input tokens (about 30 lines of text). The output, including candidate details and work history, is typically around 300 tokens. This forms the basis for the input and output token calculations in the example below.


Let’s break down the cost. 

Total Input Tokens: 

  • 150 tokens per interaction 
  • 10,000 interactions per month 
  • Total Input Tokens = 150 tokens * 10,000 interactions = 1,500,000 tokens 

Total Output Tokens: 

  • 300 tokens per interaction 
  • 10,000 interactions per month 
  • Total Output Tokens = 300 tokens * 10,000 interactions = 3,000,000 tokens 

Input Cost: 

  • Cost per 1,000 input tokens = $0.03 
  • Total Input Cost = 1,500,000 tokens / 1,000 * $0.03 = $45 

Output Cost: 

  • Cost per 1,000 output tokens = $0.06 
  • Total Output Cost = 3,000,000 tokens / 1,000 * $0.06 = $180 

Total Monthly Cost: 

Total Cost = Input Cost + Output Cost = $45 + $180 = $225 
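
The same arithmetic can be wrapped in a small helper so the estimate can be rerun as token counts, traffic, or prices change. This is a minimal sketch using the GPT-4 rates assumed above; swap in your provider's current pricing.

```python
# Monthly LLM cost estimator using the assumptions above:
# GPT-4 at $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens.
def monthly_llm_cost(input_tokens_per_call: int,
                     output_tokens_per_call: int,
                     calls_per_month: int,
                     input_price_per_1k: float = 0.03,
                     output_price_per_1k: float = 0.06) -> float:
    """Return the estimated monthly LLM spend in dollars."""
    input_cost = input_tokens_per_call * calls_per_month / 1_000 * input_price_per_1k
    output_cost = output_tokens_per_call * calls_per_month / 1_000 * output_price_per_1k
    return input_cost + output_cost

# Resume builder: 150 input tokens, 300 output tokens, 10,000 interactions a month.
print(monthly_llm_cost(150, 300, 10_000))  # -> 225.0
```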


RAG implementation cost  

Retrieval-Augmented Generation (RAG) is a powerful AI framework that integrates information retrieval with a foundational LLM to generate text. In the resume-builder use case, RAG retrieves relevant data based on the latest information without the need for retraining or fine-tuning. By leveraging RAG, we can ensure the generated resumes are accurate and up to date, significantly enhancing the quality of responses. A rough cost sketch follows Table 3 below. 

Table 3: RAG-based implementation cost 
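
To show how RAG adds to the base LLM spend, the sketch below extends the estimator with embedding and vector-database line items. The embedding rate and database fee are placeholder assumptions, not quotes from any provider; refer to Table 3 for actual figures.

```python
# Sketch of a monthly RAG cost estimate: base LLM usage plus retrieval-side costs.
# The embedding price and vector-database fee are placeholder assumptions, not provider quotes.
def monthly_rag_cost(llm_cost: float,
                     query_tokens_per_call: int,
                     retrieved_tokens_per_call: int,
                     calls_per_month: int,
                     embedding_price_per_1k: float = 0.0001,  # assumed embedding rate
                     vector_db_monthly_fee: float = 70.0,     # assumed managed vector DB tier
                     input_price_per_1k: float = 0.03) -> float:
    """Return an estimated monthly cost for a RAG deployment in dollars."""
    # Each query is embedded before searching the vector store.
    embedding_cost = query_tokens_per_call * calls_per_month / 1_000 * embedding_price_per_1k
    # Retrieved passages are appended to the prompt, so they are billed as extra input tokens.
    extra_input_cost = retrieved_tokens_per_call * calls_per_month / 1_000 * input_price_per_1k
    return llm_cost + embedding_cost + extra_input_cost + vector_db_monthly_fee

# Resume builder: $225 base LLM cost, 150-token queries, ~500 retrieved tokens per interaction.
print(round(monthly_rag_cost(225.0, 150, 500, 10_000), 2))  # -> 445.15
```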

Fine tuning cost

Fine-tuning involves adjusting a pre-trained model to better fit a specific task or dataset, which requires additional computational power and training tokens, increasing overall costs. For example, fine-tuning the resume-builder model to better understand industry-specific terminology or unusual resume formats would demand more resources and time than using the base model. Therefore, we are not including fine-tuning costs for this use case.

Summary of estimating generative AI cost 

To calculate the actual cost, follow these steps: 

  1. Define the use case: e.g., the resume builder.
  2. Check the cost of the LLM service: refer to Table 1. 
  3. Check the RAG implementation cost: refer to Table 3.
  4. Combine the costs: add the LLM service cost, the RAG cost, and the additional costs from Annexure-1, such as hardware, software licensing, development, and other services (a rough aggregation sketch follows this list). 
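
As a rough way to put step 4 together, the sketch below adds the recurring LLM and RAG spend to one-time build costs of the kind listed in Annexure-1; every figure is a placeholder to be replaced with your own estimates.

```python
# Rough aggregation for step 4: one-time build costs (Annexure-1 items) plus a year of recurring usage.
# Every figure below is a placeholder, not a quote.
one_time_costs = {
    "development_and_testing": 90_000,
    "data_acquisition_and_prep": 30_000,
    "integration_and_deployment": 35_000,
}
monthly_recurring = {
    "llm_usage": 225.0,        # from the resume-builder calculation above
    "rag_overhead": 220.0,     # embeddings, extra context tokens, vector database
    "cloud_and_monitoring": 1_500.0,
}

first_year_total = sum(one_time_costs.values()) + 12 * sum(monthly_recurring.values())
print(f"Estimated first-year cost: ${first_year_total:,.0f}")  # ~ $178,000 with these placeholders
```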

A rough estimate would fall somewhere between $150,000 and $250,000. These are ballpark figures; the actual costs may vary depending on your needs, LLM service, location, and market conditions. It’s advisable to talk to our GenAI experts for a precise estimate. Also, keep an eye on hardware and cloud service prices, because they change frequently. 

You can check out some of our successful enterprise projects here. 

GenAI reducing data analytics cost

At Robosoft, we believe in data democratization—making information and data insights available to everyone in an organization, regardless of their technical skills. A recent survey shows that 32% of organizations already use generative AI for analytics. We’ve developed self-service business intelligence (BI) solutions and AI-based augmented analytics tools for big players in retail, healthcare, BFSI, Edtech, and media and entertainment. With generative AI, you can also lower data analytics costs by avoiding the need to train AI models from the ground up.

Image source: Gartner (How your Data & Analytics function using GenAI) 

Conclusion

Generative AI investments aren’t just about quick financial gains; they require a solid data foundation. Deploying generative AI on poor or biased data can lead to more than just inaccurate results. For instance, if a company’s hiring process relies on data biased by gender or race, it could discriminate against certain candidates. In a resume-builder scenario, this biased data might incorrectly label a user, damaging the company’s reputation, causing compliance issues, and raising concerns among investors.

As we write this article, a lot is changing, and our understanding of generative AI and what it can do will keep evolving. However, our intent of providing value to customers and driving change prevails.


Why the Google Gemini Launch Matters

On December 7, Google announced the launch of Gemini, its highly anticipated new multi-modal AI architecture, including a Nano version optimized for hand-held devices. The announcement was greeted with mixed reviews.

Some users expressed doubts about the claims made by Google or whether the Gemini product was significantly better than GPT-4. Quoting an AI scientist who goes simply by the name “Milind,” Marketing Interactive suggested that Google is playing catch up at this point and that OpenAI and Microsoft might be ahead by six months to a year in bringing their AI models to market.

There was also plenty of public handwringing about a promotional video by Google featuring a blue rubber duck because the demo had been professionally edited after it was recorded.

Despite the tempest in a teapot about the little blue rubber duck, we believe the announcement is significant and deserves our full attention.

Decoding Gemini: How Parameters Shape Its Capabilities

Parameters are, roughly speaking, an index of how capable an AI might be. GPT-4 was reportedly built on about 1.75 trillion parameters.

We don’t know how many parameters were used to build Gemini. Still, Ray Fernandez at Techopedia estimated that Google used between 30 and 65 trillion parameters to make Gemini, which, according to SemiAnalysis, would equate to an architecture that might be between 5 and 20x more potent than GPT-4.

Beyond the model’s power, there are at least four points of differentiation for Gemini.

#1. Multi-modal Architecture: Gemini was built as a multi-modal architecture from the ground up. Competing architectures keep text, images, video, and code in separate silos, which forces other companies to roll out those capabilities one by one and complicates making them work together in an optimal way.

#2. Massive Multitask Language Understanding: Gemini scored higher than its competition on 30 out of 32 third-party benchmarks. On some of those it was only slightly ahead, and on others by more, but overall that’s an imposing win-loss record.

In particular, Gemini recorded an important milestone by outscoring human experts on a tough test called Massive Multitask Language Understanding (MMLU). Gemini scored 90.04% against a human expert benchmark of 89.8%, according to the benchmark authors.

#3. AlphaCode 2 Capabilities: Simultaneously with the launch of Gemini, Google also launched AlphaCode 2, a new, more advanced coding capability that now ranks within the top 15% of entrants on the Codeforces competitive programming platform. That ranking represents a significant improvement over its state-of-the-art predecessor, which previously ranked in the top 50% on that platform.

#4. Nano LLM model: Also launched alongside Gemini was Gemini Nano, an LLM optimized to run on handheld devices, bringing many of Gemini’s capabilities to edge devices like phones and wearables. For now, that’s a unique advantage for Gemini.


What are the practical implications of Gemini Nano on a handheld device?

Companies like Robosoft Technologies that build apps will collaborate with clients to test the boundaries of what Nano can do for end users using edge devices like cell phones.

Edge computing emphasizes processing data closer to where it is generated, reducing latency and dependence on centralized servers, and cell phones will undoubtedly be first in line to benefit from Nano because they can perform tasks like image recognition, voice processing, and various types of computations on the device itself.

What about Wearables or other Types of Edge Devices?

Google hasn’t said whether Nano can run on wearables or other edge devices, but its design and capabilities suggest it probably can.

First, Nano is a significantly slimmed-down version of the full Gemini AI model, making it resource-efficient and potentially suitable for devices with limited computational power, like wearables.

Also, Nano is designed explicitly for on-device tasks. It doesn’t require constant Internet connectivity, making it ideal for applications where data privacy and offline functionality are crucial — both are relevant for wearables.

In particular, we noticed that Google’s December 2023 “feature drop” for Pixel 8 Pro showcased a couple of on-device features powered by Nano, including “Summarize” in the Recorder app and “Smart Reply” in Gboard. In our opinion, these capabilities could easily translate to wearables.

What about Apple Technology?

There’s no official indication that Nano is compatible with Apple technology. We think such compatibility is unlikely because Google primarily focuses on Android and its ecosystem.

However, the future of AI development is increasingly open-source and collaborative, so it’s possible that partnerships or independent efforts by members of the AI ecosystem — including companies like Robosoft Technologies — could lead to compatibility between Gemini Nano and Apple devices.

Enterprise-Level Use Cases for Gemini Pro

From what we know so far, Gemini Pro offers good potential to enable or enhance various enterprise-level applications. Here are some critical use cases that we think are most likely to be among the first wave of projects using Gemini Pro.

Customer Service and Workflows

  • Dynamically updating answers to FAQs
  • Helping with troubleshooting
  • Routing questions to the appropriate resources
  • Extracting and summarizing information from documents, forms, and datasets
  • Filling in templates
  • Maintaining databases
  • Generating routine reports

Personalization and Recommendations

  • Creating personalized marketing messages and recommendations
  • Optimizing pricing
  • Automating risk assessments
  • Streamlining loan applications
  • Providing personalized health treatment plans
  • Recommending preventive health measures

Business Process Optimization

  • Identifying process delays
  • Optimizing resource allocation
  • Streamlining decision-making processes with improved information flow
  • Identifying cost-saving opportunities

Security and Fraud Detection

  • Identifying potential cyber-attacks
  • Identifying malicious code and protecting sensitive data
  • Analyzing financial data for suspicious activity to help prevent losses

Content Moderation and Safety

  • Moderating user comments and posts on social media, including forum discussions
  • Improving the correct identification of spam

Above all, a very foundational use for Google Gemini Pro might be to enable the implementation of an enterprise-level generative AI copilot.

What is an Enterprise-Level Generative AI Copilot?

A generative AI copilot is an advanced artificial intelligence system designed to collaboratively assist and augment human users in various tasks, leveraging generative capabilities to contribute actively to creative and decision-making processes. This type of technology is customized for specific enterprise applications, learning from user interactions and context to provide tailored support. It goes beyond conventional AI assistants by actively generating real-time suggestions, solutions, or content, fostering a symbiotic relationship with users to enhance productivity, creativity, and problem-solving within organizational workflows.

Why might Gemini Pro be a good platform for building a generative AI copilot?

We think that Gemini Pro should be considered a possible platform for building a copilot. Its capabilities and characteristics align well with the requirements of such a system.

First, Gemini Pro can process and generate human language effectively, enabling it to understand user intent and respond coherently and informally. It has a knowledge base built on 40 trillion tokens, equivalent to having access to millions of books. It can reason about information, allowing it to provide relevant and insightful assistance to users.

Also, like other generative AI platforms, Gemini Pro can adapt its responses and behavior based on the context of a conversation, helping to ensure that its assistance remains relevant and helpful.

So that’s a good foundation.
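
As a small, hypothetical illustration of that foundation, here is a sketch of a multi-turn exchange with Gemini Pro using Google's google-generativeai Python SDK; the prompts are placeholders, and the SDK surface may change as Gemini evolves.

```python
# Sketch of a multi-turn, copilot-style exchange with Gemini Pro via the
# google-generativeai Python SDK. Prompts are placeholders; the SDK may change over time.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])  # the chat object carries conversational context between turns

# First turn: ask for a summary of an internal policy (placeholder content).
reply = chat.send_message("Summarize our refund policy for a customer in two sentences.")
print(reply.text)

# Second turn: the follow-up relies on context kept in the chat history.
reply = chat.send_message("Now rewrite that summary in a friendlier tone.")
print(reply.text)
```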

Upon such a foundation, Google relies on the partners in its ecosystem to build an overall solution that addresses enterprise needs. These needs include ensuring that enterprise data is secure and is not used to train public models, controlling access to data based on job roles and other factors, helping with data integration, and building an excellent user interface. These are the areas where technology partners like Robosoft Technologies make all the difference when bringing an AI-based solution to life within an enterprise.


How emerging technologies can enable the ‘next gen’ of education

In 2021, online learning platform Coursera reported 20 million new learners in the year, equal to the total growth of the three years prior. The COVID-19 pandemic triggered an exponential jump in the already upward trajectory of online learning. Work from home, virtual classrooms, and time to pursue learning new skills saw the US record the highest growth in online learning with more than 17 million registered learners followed by India, Mexico, Brazil, and China.

Amid the devastation caused by the pandemic, governments, teachers, students and corporates benefited by accelerating digitalization efforts. For sure, today’s generation of digitally native learners lapped up this transition by educators. And although initially resistant, teachers discovered digital tools to be welcome assistants while managing schedules, keeping parents included, and doling out and marking assignments. The forced adoption of digital technologies accompanied by wider access to smartphones, made online learning accessible and affordable to larger masses globally. The shift to remote working also saw more professionals sign up on digital learning platforms to upskill and keep pace with the evolving demands of the workplace.

With the global EdTech and Smart Classroom market size expected to reach US$ 259.07 billion by 2028, the future outlook for eLearning platforms and EdTech is bright. Touted to be the mainstay of education in the future, smart classrooms will rely on a wide range of teaching tools and technologies to assist the learning experience end to end. Companies, on their part, already heavily invest their learning budgets in online resources for their workforces. A lot depends, however, on how much EdTech companies invest in the right set of technologies that fulfil the expectations of educators and learners. Their solutions must help the teaching community reduce the burden of administration and deliver affordable, quality education.

The top use cases of emerging technologies that will redefine education and the learning journey include:

Modern learning is student centric. It’s about each student getting to choose what and how to learn, anytime/anywhere, at their own pace, receiving personalized feedback, and accessing tailored recommendations based on their interests, capabilities etc. With the education sector finally on board with digitalization, EdTech offers a delightful range of possibilities to make learning experiences student-centric.

Surgent CPA Review, for example, is an AI-driven, adaptive learning exam prep course. Its proprietary algorithm evaluates performance on questions, student learning styles, exam date, available study hours, and so on to produce tailor-made study plans. Prodigy is an educational math game that’s becoming popular globally because it can customize content for different learning styles and address specific areas that pose learning difficulties.

Edtech as a teaching assistant

Technology that enables adaptive teaching and learning experiences plays a critical role, as it can deliver personalized, updated content focused on the unique needs and abilities of each learner. It can also assist teachers across all levels of education. AI supported by machine learning can automate daily administrative tasks like grading and assessments, plagiarism checks, and report generation, freeing up time for teachers and trainers to focus on improving core aspects of their course content and teaching methods. For instance, LEAD’s app for teachers comes with a customized curriculum, consistent lesson plans across all partner schools, and a handy AI-driven system that automatically generates assignment status updates and assessment reports.

Georgia State uses Jill Watson, a human-like yet affordable AI assistant to respond to student queries round-the-clock. An elementary school in New Jersey uses an AI-based teaching assistant to help teachers figure out problematic areas of learning mathematics and fine-tune learning methods for each young learner.

Learning companions to improve the inclusiveness of education

Assistive technology is increasing in acceptance as educators are able to extend the learning experience to students who are unable to attend regular classroom sessions. Those with special needs require simpler, easy access to educational content and personalized monitoring because of certain developmental challenges. For example, robots are helping preschoolers with autism practice non-verbal communications skills. The biggest advantage offered by these robots is that they can engage each student with the kind of individual attention and assistance required to help ease their learning journey.

Research is also being conducted to use Artificial Intelligence (AI) for improving learning for those with visual and auditory challenges. For example, the National Technical Institute for the Deaf, housed at the Rochester Institute of Technology, has developed an app that turns speech into text to help deaf/hearing impaired persons interact more easily. This was in response to the communication barriers that came up for persons with hearing difficulties when face masks became compulsory during the pandemic.

Video: ASL TigerChat explained (https://youtu.be/QuKIWUEd_w4)

Another use case of AI that can be a game changer in special education is detecting patterns in large amounts of data and applying these insights to identify and define certain disabilities like dyslexia with greater accuracy.

New immersive experiences shaped by technology

Research suggests that learners retain knowledge better when they are taught using multiple modalities and delivery methods. Traditional and digital formats are therefore likely to coexist in the future.

With video becoming a popular means of consuming content, digital devices and broadcast technologies finally have an opportunity to converge. OTT platforms and 5G connectivity in combination can deliver higher quality video at reliable speeds. Through the possibilities unlocked by live streaming in 4K and 360-degree videos, learners will be able to consume educational content of their choice at an enhanced level of immersiveness and engagement in multiple formats and modes.

Augmented Reality (AR) can help medical interns fully immerse themselves in training and practice via virtualization of a complex surgical procedure, without putting any lives at risk or incurring huge expenses in the real world. NASA teaches budding astronauts how to take a walk on Mars using visuals generated through AR. The Metaverse too will enable close-to-real-life experiences, a safe way to simulate learning until a desired outcome has been achieved. It provides another dimension to educational storytelling and gamification, making learning more fun and engaging. For example, Arizona State University and Dreamscape Immersive, a VR entertainment and technology company, have collaborated to create virtual zoology labs for an explorative learning approach inspired by the metaverse.

Gamifying education

As learners of every age become more digitally savvy, gamification ensures engagement in a highly personal and interactive manner. Kindergarten can become more enjoyable with interactive games catering to young learners. Pearson’s interactive education app, for example, is brimming with images, videos, and interactive games of varying levels, difficulties, and types, offering fully immersive, individualized learning experiences for children: each child gets their own avatar and personalized learning journey. At the same time, teachers and parents can track the child’s progress easily.

Traditional learning methods can be gamified and infused with elements of fun and healthy competition through interactive quizzes, dynamic leaderboards, reward systems, and badges that acknowledge and motivate learners. For example, Tinycards has gamified the flash card learning technique and made it more enjoyable. As learners advance through the cards, their progress is tracked and earns them brownie points for every milestone achieved.

We are rapidly entering a future where education will find its place in a hybrid environment – the offline and online formats will coexist and support each other by bringing the best of their respective worlds. Rather than being seen as a makeshift alternative to physical/classroom learning, EdTech can potentially become the enabler of a robust and resilient system of education, acting as a multiplier to the current in-campus models. With the ability to extend the reach of education across geographies, reduce the burden on teachers, and include those sections who earlier did not have access to learning, the convergence of education and EdTech will see a new era emerge.


6 Dating App Trends in 2023 – Right Swiping Technology to find Perfect Match

Since the dawn of time, pursuing a significant other has been one of the life purposes of every living being. Humans, birds, and other animals all go through this natural order of mate selection to populate their species. While other animals usually fight it out to present themselves as the strongest candidate, things have become much easier for humans, thanks to technology and dating apps.

Cavemen and medieval men used to fight and duel over the approval of a woman. Nowadays a quirky bio and just a right swipe is enough.

The 90s saw a rise of matchmaking websites in India as well as globally, with shaadi.com, bharatmatrimony.com, match.com, and others. They started out as the preferred online medium to find suitable matches according to social compatibility factors like caste, culture, region, language, education, etc. But very much like Netflix took over Napster, Tinder’s mobile-first, platform-based approach took over existing linear models to become the popular choice of dating medium. Globally, Tinder was the highest grossing non-gaming app in 2017.

The online dating market showed no signs of slowing down during and after the pandemic, and was valued at US$12.37 billion in 2021. It is now expected to be worth US$28.36 billion by 2027. We are seeing an influx of dating apps such as Bumble, Hinge, Grindr, Hily, Clover, Plenty of Fish, etc. They all come with their own unique proposition for finding matches for their users. As the dating behaviors of users change with time, these apps adapt to those changes and provide what their users need.

Trend is in the app

Trends are simply the general direction in which something is changing. The biggest transition we can see in the dating scenario is that people are now being more selective about who they go out with.

A recent survey shows “61% of daters use an online dating app to meet people that shares common interests, 44% of daters use an online dating app to meet someone who shares their values and beliefs, and 42% of daters use an online dating app to meet someone for marriage”.

These numbers indicate the current mindset of people regarding partner selection through dating apps.

The pandemic caused a lot of mental, emotional, and physical stress. As a result, people had more time to reflect on their needs and priorities. Dating apps acknowledged their users’ priorities and introduced several technology-driven features to meet those needs. Below are some of the noticeable CX and behavioral trends among daters and dating apps:

#1 Let’s take it slow

While the pandemic forced people to stay inside, dating apps didn’t suffer the consequences. In fact, research by Sensor Tower shows that dating app downloads grew 3% year over year in Q4 of 2020. The same research also indicates that the average age of dating app users has steadily declined in recent years, a trend that was most visible from Q1 to Q3 of 2020.

Figure: Average age of dating app users during the pandemic (Source: Sensor Tower)

New dating terms are making the rounds among young users from millennials and Gen Z: dry dating, hesidating, slow dating. All were coined because of people’s unwillingness to go all out with complete strangers.

According to Tinder’s CEO, Renate Nyborg, Gen Z makes up more than half of its user base, and they want to take things slow in dating. Their idea of an ideal dating scenario differs from millennials’, as they want to know their potential matches better before committing themselves romantically or meeting them. Tinder launched different intent-based swipe features for its users, who can now match by adding “Passions, Prompts and Vibes” to their profiles. All of this helps matches get to know each other better without any romantic expectations, and to take things further only if the “vibes match”.

#2 Discretion for safety reasons

Dating apps leveraged their digital capabilities to remain competitive during full lockdowns. As in-person meetings were not possible, dating apps introduced in-app video call features for locked-in individuals. The nimbleness of dating apps in adapting to change was one of the reasons their demand didn’t drop the way it did for many other businesses.

But as people use video call features more and more, questions of privacy and safety arise. As a result, many dating apps now offer discreet video call features where users can appear with blurred faces or as silhouettes. The new video call features also take users’ permission before connecting a call, for increased discretion. Due to heightened safety concerns, many dating apps started taking different measures to address them. S’More defines itself as an “anti-superficial dating app” because it doesn’t reveal users’ images straight away: the profile image appears blurred initially and gets clearer as the conversation between matches continues.


#3 Minimal effort, maximum gain

Almost all freemium dating apps like Tinder offer a limited number of free swipes per day to their users. However, the introduction of AI-based recommendations has increased the likelihood of users being hooked to the app. AI and ML learn from a user’s personal data and preferences to ensure every match has the possibility of being “the one”.

There are over 300 million dating app users worldwide with about 20 million subscribed to one of their premium features. It creates an opportunity for dating apps to increase the likelihood of in-app purchases by offering more value or better matches to their users.

#4 Inclusivity for exclusivity

The huge popularity of value-driven, niche dating platforms in recent years has signaled a shift away from mindless swiping by global users. People now prefer quality over quantity and are looking for dating apps where they “truly belong”. There are already successful apps like Grindr catering to gay, bisexual, and bi-curious men. Similarly, other niche dating apps cater to particular sexual preferences, hobbies, or interests.

Dig is a niche dating app for dog lovers, consisting only of dog-loving users. It eliminates daters’ concern about what their match would think of their favorite pet.

Veggly is a dating app specifically for vegans and vegetarians.

Tastebuds is especially built for music lovers who match and can immediately start discussing their favorite artists, bands, etc.

BLK is a dating app for Black singles. It strives to create a warm, inviting, supportive, and inclusive space where Black love is celebrated and respected in all its forms.

Her is a dating app built specially for the lesbian, bi, and queer community. The free version of the app lets you add friends, view profiles, start chats, view events, and join communities.

Other popular apps like Tinder, Hinge, and Bumble have taken cues from this and redesigned their apps to include more sexual orientation options, more varied interests, and suggested bios to showcase on a profile.

#5 It’s a social thing now

One of the most difficult steps in online dating is the talking phase, where you try to find common things to talk about. That’s when you feel the need for a friend to support and guide you. Dating apps like Fourplay encourage people to form a tag team with a friend and match up with two others as they start messaging each other.

The Thursday app helps skip the talking phase altogether and brings the online dating community directly offline. It hosts secret parties where only singles are allowed. The most likely scenario is that a person takes along one of their single friends to these events to “socialize”. Ship (now discontinued) allowed users to become a matchmaker and find a suitable match for their friend. It also offered a group chat feature for better validation of the potential match.


#6 Gamification could be the key

Tinder’s “Swipe Night” was a huge success. It lets users solve a mystery through game narration and first-person adventure. Users’ choices dictate the story and reveal different answers. Users can then highlight their game answers in their Tinder bios, and they are more likely to match with people who have similar answers and thinking patterns.

Bumble introduced sets of recommended ice breaking questions to help matches get over the initial nervousness and start talking. Both users answer one of the chosen ice breaking questions and match their answers. Based on the answers they can carry forward their conversation.

Technology – the ultimate matchmaker in the digital era

Dating apps are, in essence, tech companies leveraging technology to offer social value. The fast adoption of newer technologies and digital transformation seen in every industry can be seen in dating apps as well. Dating apps are now taking advantage of cutting-edge software and technology such as AI/ML, VR, and the metaverse to provide a whole new dating experience to their users. Let’s take a deeper look at how these technologies are playing a matchmaking role in our dating lives:

AI/ML

Earlier, Tinder used an ELO algorithm for matching profiles on the platform. It worked on a weightage system where users with the most right swipes had a better probability of finding matches quicker. Tinder has since moved away from this and now relies on a “dynamic system” that monitors user behavior on the platform through swiping patterns and profile content. Although not stated explicitly, this dynamic system could well be AI and ML deployed by Tinder for matching profiles.

“In a recent interview, Jennifer Flashman – Tinder’s director of analytics, explains that in leveraging AI to build better user experiences, it’s become clear to her that the future of dating will increasingly occur over texts and DMs rather than blind dates and phone calls. As this shift continues to accelerate, here are the top reasons she thinks companies are “swiping right” on AI in dating—and why other industries should be figuring out how to swipe right!”

AI and ML are already creating efficient, smart business processes in different industries, and they have the potential to transform the dating industry as well. The dating app Hinge employs machine learning as part of its algorithm, suggesting a “Most Compatible” match to its users.

5G

5G, with its increased bandwidth, reliability, and speed, has made it possible for dating apps to introduce more video-based features. Although the main beneficiary of 5G services is the OTT industry, dating apps can also enjoy its benefits: they now offer buffer-free video calls, uninterrupted live streaming, Netflix watch parties, and more to their users for increased engagement. With time, we can only imagine what other benefits 5G and its subsequent updates may bring to the dating world.

Blockchain

The two founding principles of blockchain are full transparency and immutability. These two factors can play a major role in verifying user identities in dating apps while maintaining the option of privacy.

German company Hicky was one of the first to introduce a blockchain-based dating app, back in 2018. It was built to ensure security and incentivize good behavior among its users.

Luna works on a tokenized dating system and incentivizes people to choose their contacts more carefully.

Ponder uses a blockchain-based recommendation system and game mechanics in its app. It also offers financial rewards to motivate everyone to play matchmaker for their friends.


VR and Metaverse

The possibilities of the metaverse are endless for daters, opening the gate to a whole new world. Dating applications with a metaverse framework depend on the idea of avatars, an advanced articulation of an individual. Nevermet, for example, strives to find matches for people in the metaverse and VR.

Read more: Metaverse or MetaAverse – A Design Thinking Approach To Future Digital Ecosystems

Finding meaningful relationships in virtual platforms like online gaming is nothing new. But VR dating apps like Flirtual and Planet Theta provide the feeling of being physically present with others as well as bring a significant portion of body language into the mix.

VR technology enables the users to connect with their matches authentically in fantastical environments that are impossible to replicate in the real world. People can visit any location, go to any bar, play with unicorns, all on their first date.

Find your ‘lobster’

Contrary to popular beliefs, it’s the nerds that get the dates. Quite literally!

Being tech companies first, the top dating apps are always in an advantageous position to pivot and redefine their value proposition. They continuously vie for users’ attention and roll out new features whenever deemed necessary. Bumble and Hinge rolled out new voice prompts, while Tinder is working on a social mode called Swipe Party. Bumble also recently made its first acquisition in Fruitz, aptly described as a “Gen Z dating app”.

There is still a large untapped market out there waiting for something in its niche. There are currently over 1,500 dating apps and websites worldwide, and with new apps quickly emerging, nothing is certain for established big players. There are plenty of fish in the sea if you have the right strategy in place to catch them. The dating industry faces a real need to embrace innovation or get overshadowed by newer dating apps with fresher ideas and newer technology behind them. Dating apps have every incentive to improve their transparency and provide users with a more complete experience.


How the next gen of education can be enabled by emerging technologies


In 2021, online learning platform Coursera reported 20 million new learners in the year, equal to the total growth of the three years prior. The COVID-19 pandemic triggered an exponential jump in the already upward trajectory of online learning. Work from home, virtual classrooms, and time to pursue learning new skills saw the US recording the highest growth in online learning with more than 17 million registered learners followed by India, Mexico, Brazil, and China.

Amid the devastation caused by the pandemic, governments, teachers, students, and corporates benefited by accelerating digitalization efforts. For sure, today’s generation of digitally native learners lapped up this transition by educators. And although initially resistant, teachers discovered digital tools to be welcome assistants for managing schedules, keeping parents included, and doling out and marking assignments. The forced adoption of digital technologies, accompanied by wider access to smartphones, made online learning accessible and affordable to larger masses globally. The shift to remote working also saw more professionals sign up on Learning Management Solutions to upskill and keep pace with the evolving demands of the workplace, learning about emerging technologies, wellness and personal growth, and management behaviors.

With the global EdTech and Smart Classroom market size expected to reach US$259.07 billion by 2028, the future outlook for eLearning platforms and EdTech is certainly bright.

A lot depends, however, on how much EdTech companies and educators invest in the right set of technologies to fulfil the expectations of educators and learners. Whether in educational institutions or corporate learning, solutions must help the teaching community reduce the burden of administration and deliver affordable, quality education to an audience that increasingly relies on this format for its learning and training needs. Touted to be the mainstay of education in the future, smart classrooms will rely on a wide range of teaching tools and technologies to assist the learning experience end to end. Companies, on their part, already invest their learning budgets heavily in online resources for their workforces.

With the education sector finally on board with digitalization, technology offers a delightful range of possibilities for EdTech to transform learning experiences.

The top five use cases of emerging technologies that will redefine education and the learning journey include:

1. Adaptive teaching that is human-centric

Learning is becoming more student centric with a growing preference for personalized experiences. While research suggests that modern-day learners prefer reading the more affordable and convenient digital form of their textbooks to the print version, Bay View Analytics research found that 43% of college faculty believe students retained knowledge better when learning from printed matter. Research also suggests that modern learners retain knowledge better when they are taught using multiple modalities and delivery methods.

The world of education has changed irrevocably, creating shifts in the teacher-learner dynamic. The role of the teacher has transformed too, becoming more significant: teachers are not expected simply to pass on information but are also required to function as facilitators of the learning journey. They are, therefore, expected to switch modes to suit the student’s learning style and capacity. They also have to continuously monitor and assess the learner’s journey so as to customize the experience and make it delightful and meaningful for their audience. Educators, who have traditionally seen themselves as the controlling authority over educational material, now have to adapt their teaching mindsets to suit modern preferences and expectations of easy, inclusive accessibility.

Technology enabling adaptive teaching and learning experiences holds the key, as it can deliver personalized, updated content focused on the unique needs and abilities of each learner. And the best thing is that adaptive learning works across all levels of education. Surgent CPA Review, for example, is an AI-driven, adaptive learning exam prep course. Its proprietary algorithm evaluates performance on questions, student learning styles, exam date, available study hours, and so on to produce tailor-made study plans. Prodigy is an educational math game that’s becoming popular globally because it can customize content for different learning styles and address specific areas that pose learning difficulties. Room to Read, a leading non-profit organization based in California, offers an interactive and feature-rich digital platform, developed by Robosoft, to foster a reading habit among children. Test Coach is another comprehensive online learning platform developed by Robosoft for students; it brings the best of both offline and online learning by providing a seamless digital experience.

2. AI as a teaching assistant

Teachers bear a significant burden of administration: lesson planning, assignment grading, learner assessments and recommendations, and reports and metrics on performance at the individual and group level. Artificial Intelligence (AI) lends itself to automating certain daily administrative tasks like grading and report generation, freeing up time for teachers and trainers to focus on improving core aspects of their course content and teaching methods.

AI supported by machine learning is used for customized content delivery, learning assessment, plagiarism checks, virtual assistance, multiple language support, and computer vision. AI tools like ElevateU help colleges assess student performance and decide on the content and format best suited for each student. Georgia State uses Jill Watson, a human-like yet affordable AI assistant to respond to student queries round-the-clock. An elementary school in New Jersey uses an AI-based teaching assistant to help teachers figure out problematic areas of learning mathematics and fine-tune learning methods for each young learner.

3. Learning companions to suit each learner’s pace

Assistive technology is gaining acceptance as educators extend the learning experience to students who are unable to attend regular classroom sessions. For example, those with special needs require simpler, easier access to educational content and personalized monitoring because of certain developmental challenges. Accounts of assistive technology, such as robots helping preschoolers with autism practice non-verbal communication skills, have been making waves on the internet in recent years. The biggest advantage offered by these robots is that they can engage each student with the kind of individual attention and assistance required to ease their learning journey.

AI can also play a valuable role in enhancing learning outcomes by identifying patterns in erroneous answers, areas of improvement in course material, and enabling individualized feedback messages relevant to a specific learner, which wouldn’t have been possible otherwise. Experts believe that AI can help provide feedback in alternative formats such as a video/audio message that may go down better with the recipient learner and help break down their resistance to consider criticism in a positive light.

4. Gamification and visualization of real-life situations

Augmented Reality (AR) can replace paper-based learning material as all that the learner requires is a smartphone. With a smartphone in almost every hand, it is much easier to create an immersive learning experience, for example, of plant life through a walk in the park. Smart classrooms that are more interactive, immersive and collaborative have also become readily available.

As learners of every age are becoming more digitally savvy, AR brings alive the visualization and ensures engagement through gamification in a highly personal and interactive manner. For example, medical interns can safely and fully immerse themselves in training and practice via the how-to virtualization of a complex surgical procedure without putting any lives at risk or incurring huge expenses in the real world. NASA teaches budding astronauts how to take a walk on Mars employing visuals generated through AR. At the other end of the spectrum, kindergarten can become more enjoyable with interactive games catering to young learners.

The Metaverse too will enable close-to-real-life experiences, a safe way to simulate learning until a desired outcome has been achieved, including important and practical tasks such as performing advanced medical surgeries, conducting astrophysics experiments, and visualizing a rocket launch. It provides another dimension to educational storytelling and gamification, making learning more fun and engaging. For example, Arizona State University and Dreamscape Immersive, a VR entertainment and technology company, have collaborated to create virtual zoology labs for an explorative learning approach inspired by the metaverse.

5. Seamless consumption of multi-format, multi-genre content at the learner’s convenience

With video becoming a popular means of consuming content, digital devices, and broadcast technologies finally have an opportunity to converge. OTT platforms and 5G connectivity in combination can deliver higher quality video at reliable speeds. Live streaming in 4K, 360-degree videos, highly interactive experiences – the opportunities to generate an immersive learning experience are almost limitless.

We are rapidly entering a future where education will find its place in a hybrid environment – the offline and online formats will coexist and support each other by bringing the best of their respective worlds. Rather than being seen as a makeshift alternative to physical/classroom learning, EdTech can potentially become the enabler of a robust and resilient system of education, acting as a multiplier to the current in-campus models. With the ability to extend the reach of education across geographies, reduce the burden on teachers, and include those sections who earlier did not have access to learning, the convergence of education and EdTech will see a new era emerge.

For this to come about, EdTech needs to befriend emerging technologies such as OTT/5G, AI, AR/VR, metaverse, data analytics to enable seamless, enhanced learning experience while bringing more learners into its fold. This way, EdTech companies will also be able to move quickly to capitalize on new revenue streams that technology opens up as education settles into its next-gen avatar.


Conversational AI breaks through user barriers – Designing a fulfilling conversation is key

Hey Alexa, what is conversational AI? If you’ve ever interacted with a virtual assistant like Siri, Alexa or Google Assistant, then you’ve experienced conversational Artificial Intelligence (AI). These game-changing automated messaging and speech-enabled applications have permeated every walk of life, creating human-like interactions between computers and humans. From checking your appointments and carrying out bank transactions, to tracking the status of your food or delivery order and learning the names of songs, conversational AI will soon be playing a lead role in your digital interactions.

So, how does Conversational AI work?

Users interact with conversational AI through text chats or voice. Simple FAQ chatbots require specific terms to derive responses from their knowledge bank. However, applications based on conversational AI are far more advanced – they can understand intent, provide responses in context, and learn and improve over time. While conversational AI is the umbrella term, there are underlying technologies such as Machine Learning (ML), Natural Language Processing (NLP), Natural Language Understanding (NLU) and Natural Language Generation (NLG) that enable text-based interactions. In the context of voice, additional technologies such as Automatic Speech Recognition (ASR) and text-to-speech software enable the computer to “talk” like a human.

Conversational AI process

Imagine you give a command to a conversational AI application to track your order. This input can be spoken or typed. If spoken, ASR converts the spoken phrases into machine-readable text. The application then moves into the NLP stage, where NLU works out the context and intent of the message. Based on this, a dialogue management system decides on a response, which NLG turns into understandable language. The response is then delivered as text or, in the case of voice, converted to speech through text-to-speech software. All this happens in a matter of seconds, to get you the information you need about the status of your order.
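To make this flow concrete, here is a minimal Python sketch of the pipeline described above. The stage functions, the keyword-based intent detection and the hard-coded order status are simplified stand-ins chosen for illustration, not a real ASR/NLU/NLG stack.

```python
# Minimal sketch of the conversational AI pipeline described above.
# Each stage is a simplified stand-in, not a production component.

def asr(audio: bytes) -> str:
    """Automatic Speech Recognition: spoken audio -> machine-readable text."""
    return "where is my order 4521"  # a real system would call a speech-to-text engine

def nlu(text: str) -> dict:
    """Natural Language Understanding: extract intent and entities from the text."""
    intent = "track_order" if "order" in text.lower() else "unknown"
    order_id = next((token for token in text.split() if token.isdigit()), None)
    return {"intent": intent, "order_id": order_id}

def dialogue_manager(parsed: dict) -> dict:
    """Decide what to do with the detected intent (e.g. look up the order status)."""
    if parsed["intent"] == "track_order" and parsed["order_id"]:
        # The status would normally come from a backend order-management system.
        return {"action": "order_status", "order_id": parsed["order_id"], "status": "out for delivery"}
    return {"action": "fallback"}

def nlg(decision: dict) -> str:
    """Natural Language Generation: turn the decision into a human-readable reply."""
    if decision["action"] == "order_status":
        return f"Your order {decision['order_id']} is {decision['status']}."
    return "Sorry, I didn't catch that. Could you rephrase?"

def text_to_speech(reply: str) -> bytes:
    """Text-to-speech: convert the reply back to audio for voice channels."""
    return reply.encode("utf-8")  # placeholder for a real TTS engine

# Voice flow: audio -> ASR -> NLU -> dialogue management -> NLG -> TTS
reply = nlg(dialogue_manager(nlu(asr(b"..."))))
print(reply)  # "Your order 4521 is out for delivery."
audio_out = text_to_speech(reply)
```

In a real deployment each stage would be backed by trained models and business systems; the point here is only to show how the stages hand off to one another.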

Conversational AI will create a real and personal relationship between humans and technology

As our world becomes more digital, conversational AI can enable seamless communication between humans and machines, with interactions that become an integral part of daily life. Besides improved user engagement, conversational assistants allow round-the-clock business accessibility and reduce manual errors in sharing information. They reduce the dependency on people for multi-lingual support and enable inclusion by removing literacy barriers. The benefits and potential of conversational AI are prompting businesses and technology providers to make heavy investments in the space.

Sales, service and support have been early adopters of conversational AI because of the structured nature of information exchange these functions require. This has decreased query resolution times, reduced the dependence on human agents and opened up the possibility of 24/7 sales and service. AI chatbots can even deliver purchase recommendations based on personalized customer preferences. According to Gartner, chatbots and conversational agents will raise and resolve a billion service tickets by 2030.

Across sectors, conversational AI is transforming interactions between people and systems. The banking sector is banking on conversational AI to provide a superior experience through transactions such as providing balance information, paying bills, marketing offers and products and so on, all without human intervention. The insurance sector is using chatbots to help customers choose a policy, submit documents, handle customer queries, renew policies and more. The healthcare sector is using these chatbots to check patient symptoms, schedule appointments, maintain patients’ medical data, and share medication and routine check-up reminders. Automobiles are becoming cockpits for personal AI assistants and in-car experiences.

Businesses are also using conversational AI to manage their own workforce and improve the employee experience. Through chatbots, they make vital information available to employees 24/7, reducing the need for human resources to manage queries and processes. The possibilities and opportunities with conversational AI are endless and use cases are available in every industry.

Overcoming user frustration with Conversational AI through better engineering and design

While there are several benefits to conversational AI, you might be familiar with many instances when the conversation ends in frustration. As AI technology evolves and matures, these challenges must be addressed at the design and engineering stage.

In terms of design, the success of the platform hinges entirely on the user interface and experience. It must be easy to use, intuitive, and fit seamlessly into the overall design of the application and the customer journey. While UI is important, the conversation itself is the most critical aspect. It is important to ensure that the conversational design flows smoothly, follows well-tested and widely applicable patterns, and has exception handling built into the script design.

The more human-like the conversation is, the better the user’s acceptance

  • Draw from real life – To design a fulfilling conversation, architects and UX designers must draw from real-life conversations and UX design principles. The product has to be designed for ease of use, ease of conversation and ease of resolution, and it has to be easily findable, accessible and usable within the overall product ecosystem. This can be achieved by following time-tested UI and UX principles in developing visual or auditory experiences.
  • Build trust – To build trust in conversational AI, small talk or playful ways to engage with the AI can be built into the engagement.
  • Understand the target audience – Understanding the target audience and their needs is pivotal to the success of conversational AI. An in-depth study of the demographics helps in building a platform that is unbiased. Incorporating languages, accents and cultural nuances allows users to relate better and enables smoother interactions.
  • Solve customer problems, not business problems – A deep understanding of the customer ensures that the conversation design solves for the customer rather than for the business problem. When the focus is on the business problem, there is a possibility of ignoring the human-like flow of interaction. Putting the customer first helps in building a valuable and desirable interface that is a win-win for both the customer and the business. It is also important to ask what the system will help resolve and design the conversation so that the most frequent use cases for the application are handled logically and seamlessly. [Image: example of a bad AI chatbot interaction]
  • Recover from lagging conversations – The AI bot must also have the ability to learn from mistakes, recover from broken conversations and redirect to human agents when conversations cannot be fulfilled through AI (a simple escalation pattern is sketched below). This has to be designed seamlessly into the interface, ensuring that customers trust the system and come back to use it in the future.
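As a rough illustration of that recovery pattern, the sketch below shows one way a bot might decide when to ask the user to rephrase and when to escalate to a human agent. The confidence threshold, the retry limit and the handoff_to_agent() function are hypothetical choices made for this example, not a prescribed design.

```python
# Hypothetical escalation sketch: if intent confidence is low, or the conversation
# has failed repeatedly, hand the user over to a human agent.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.6   # below this, the bot should not guess (assumed value)
MAX_FAILED_TURNS = 2         # after this many failures, escalate (assumed value)

@dataclass
class Conversation:
    failed_turns: int = 0
    transcript: list = field(default_factory=list)

def handoff_to_agent(conv: Conversation) -> str:
    # In a real deployment this would open a ticket or route to a live-chat queue.
    return "Let me connect you to a human agent who can help with this."

def respond(conv: Conversation, intent: str, confidence: float) -> str:
    conv.transcript.append((intent, confidence))
    if confidence < CONFIDENCE_THRESHOLD:
        conv.failed_turns += 1
        if conv.failed_turns >= MAX_FAILED_TURNS:
            return handoff_to_agent(conv)
        return "Sorry, I didn't quite get that. Could you rephrase?"
    conv.failed_turns = 0  # a successful turn resets the failure counter
    return f"Sure, I can help with '{intent}'."

conv = Conversation()
print(respond(conv, "unknown", 0.3))   # first miss: ask the user to rephrase
print(respond(conv, "unknown", 0.2))   # second miss: escalate to a human agent
```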

Engineering can help provide human-like interaction

  • The systems have to be able to deal with noisy settings and decipher languages, dialects, accents, sarcasm and slang that could influence the intent of a conversation. Intensive training on larger, more varied datasets, language-specific training and machine learning (ML) advances could solve these challenges as the technology matures.
  • Another concern with conversational AI is data privacy and protection. To gain user trust, security must be paramount and all regional privacy laws must be adhered to.
  • Backend integration of conversational AI platforms may decide their success or failure in the market. The platform must integrate with CRM, after-sales, ticketing, database and analytics systems, and so on, to fetch the right data for the user and feed the right data back to the business (see the sketch after this list).
  • Finally, the AI system should be backed by analytics and data, so that data scientists have invaluable insights to continuously improve the system.
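To illustrate the backend-integration point, here is a small, hypothetical sketch in which the conversational layer fulfils an intent by calling business systems such as order management and ticketing. The OrderSystem and Ticketing interfaces, their method names and the fake implementations are assumptions made for the example, not any specific vendor's API.

```python
# Hypothetical sketch: the conversational layer fulfils a resolved intent by
# calling backend business systems (order management, ticketing, CRM, etc.).

from typing import Protocol

class OrderSystem(Protocol):
    def get_status(self, order_id: str) -> str: ...

class Ticketing(Protocol):
    def open_ticket(self, summary: str) -> str: ...

class FakeOrderSystem:
    def get_status(self, order_id: str) -> str:
        return "out for delivery"   # stand-in for a real order-management API call

class FakeTicketing:
    def open_ticket(self, summary: str) -> str:
        return "TCK-1001"           # stand-in for a real ticketing-system call

def fulfil(intent: str, entities: dict, orders: OrderSystem, tickets: Ticketing) -> str:
    """Map a resolved intent onto the appropriate backend call."""
    if intent == "track_order":
        status = orders.get_status(entities["order_id"])
        return f"Order {entities['order_id']} is {status}."
    if intent == "report_issue":
        ticket_id = tickets.open_ticket(entities["summary"])
        return f"I've raised ticket {ticket_id} for you."
    return "I couldn't find a system to handle that request."

print(fulfil("track_order", {"order_id": "4521"}, FakeOrderSystem(), FakeTicketing()))
```

Keeping the conversational layer behind narrow interfaces like these also makes it easier to swap in the real CRM, ticketing or analytics systems later without touching the dialogue logic.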

Conversational AI is growing at an incredible pace and at a massive scale because of its immense potential to bridge the gap between humans and technology. Demand is also driven by the efficiencies and cost savings that conversational AI can offer businesses through quick, accurate and effortless query resolution. Businesses across industries should leverage this technology of the future to deliver a consistent and superior user experience.
