How AI is helping build, and humanize, virtual worlds

Playing video games, it’s easy to forget how much calculation is actually running in the background to produce what you’re seeing. Everything from a game’s physics to its lighting and terrain is the result of some carefully programmed computation — gaming companies can be some of the most complex software companies out there.

It’s no wonder, then, that gaming and artificial intelligence have such a long history. Dungeon-crawling games like 1980’s Rogue and 1996’s Diablo were already using AI to generate unique maps through what’s called procedural generation. Conversely, games created by and for humans have been repeatedly used as benchmarks of progress in AI — the respective victories of DeepMind’s AlphaGo in Go and AlphaStar in StarCraft II were landmark events in that regard. This relationship has only gotten closer in recent years, as game developers started applying new breakthroughs in deep learning to more and more aspects of their workflow and gameplay.

In our view, two particular areas of AI-enabled creativity are showing great promise: content generation on the one hand, and character intelligence on the other. Within each segment, we’re seeing companies position themselves differently across the creative spectrum, by focusing either on the visuals — what the player is seeing — or the logic — the underlying rules — of a game. If we were to map the current landscape, it could look something like the graph below:

Source: BITKRAFT analysis

In this piece, we’ll explore how developers can use AI to enhance their creative workflow and to generate not just smarter, but more human characters and worlds.

AI as a creation tool

AAA game development is notoriously hard. The need for both artistic and technical skills, and the joint effort of dozens to hundreds of creatives for years on end, make it an equally time-, compute-, and capital-intensive process. This is increasingly true as each new generation of hardware raises players’ expectations, imposing new costs on developers. With ballooning team sizes and production times, the average cost of development is estimated to have roughly doubled with each new generation of consoles.

Since manpower accounts for a majority of these costs, we think AI has the potential to solve many of the hurdles developers face today. Specifically, we envision benefits across four main areas:

Productivity: Increasing consumer demand for AAA games has put pressure on studios to deliver results under tight deadlines. This has led to dreaded periods of crunch across the industry, with serious consequences for the well-being of artists and developers. Automating tedious tasks with AI could help time-pressed teams do more while focusing on the more creative aspects of the work.

Scalability: The industry’s move toward a game-as-a-service model has turned games into living destinations, with open-world titles now expected to constantly generate new territory to explore — Hello Games’ No Man’s Sky, for example, claims it is able to produce over 18 quintillion, or 18 x 10^18, unique planets. Work of this magnitude can reasonably only be achieved through procedural generation.

Consistency: AAA games today are often developed simultaneously across territories, with different studios for example handling a game’s terrains, characters, and sound design. This form of geographic distribution can fragment a company’s knowledge base, making it hard to enforce a unified look and feel. AI-assisted tools implemented across individuals, departments, and studios could help maintain the same creative principles across everyone’s workflow.

Serendipity: Conversely, an individual or a group of individuals could use AI to introduce some degree of chance, as a way to fight against creative dead ends and uniformity. Far from imposing standardization, AI then would serve as an agent of change, for example suggesting textures, assets, or color palettes that a designer might not have considered on their own.
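The procedural generation behind claims like No Man’s Sky’s quintillions of planets rests on deterministic pseudo-randomness: a single seed expands into a full environment, so a world can be “stored” as one number rather than as authored level data. A toy illustration in Python — the tile logic here is entirely hypothetical, not any studio’s actual pipeline:

```python
import random

def generate_map(seed: int, width: int = 8, height: int = 8,
                 wall_chance: float = 0.3) -> list[str]:
    """Deterministically expand a seed into a tile map: same seed, same world."""
    rng = random.Random(seed)  # seeded generator: reproducible stream
    return [
        "".join("#" if rng.random() < wall_chance else "." for _ in range(width))
        for _ in range(height)
    ]

# The same seed always reproduces the same layout, so content never needs
# to be shipped or stored — only regenerated on demand.
assert generate_map(42) == generate_map(42)
```

The same principle scales from a roguelike dungeon to a planet: only the seed and the generator travel with the game.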

An opportunity space for specialized AI tools

Leveraging AI to power up their development workflow along these axes — either on an ad hoc basis or at a more structural level — will enable studios to better keep up with players’ expectations. We believe industry demand for these capabilities is set to grow rapidly in the coming years, opening up an exciting opportunity space for specialized companies.

One such example is Anything World, whose technology lets 3D creators use their voice to produce elaborate 3D scenes. The company combines AI with speech recognition to turn a user’s voice commands into 3D output, pulling corresponding assets from third-party libraries like Sketchfab. The company claims developers are able to build interactive 3D experiences up to 40% faster as a result.

Source: Anything World on YouTube

Another contender is Promethean AI, which lets game artists automatically generate stylistically consistent 3D scenes. Using semantic awareness — the ability to understand the broader visual context that a designer is working with — Promethean can suggest 3D assets that match the look and feel of a scene, and place them appropriately in that environment. For a kid’s room, for example, Promethean might suggest a teddy bear or a toy box. Picking up clues in the scene, it would also make sure the room looks untidy, and add shadows and reflections to match the artist’s style and existing lighting conditions.

Lowering barriers to creativity through UI

Both Anything World and Promethean AI point to the same potential: a future where AI not just accelerates but enhances creativity. If they succeed, creatives will be better equipped to face tomorrow’s consumer demand for immersive media, and to develop and scale ever richer worlds.

Both companies also understand that technology alone is not a cure-all. If AI tools are to help more than just professionals, pioneers in the space will need to make them accessible from day one. Doing so could be good for business, too, as the companies that can package complex features into intuitive interfaces will be well positioned to capture the market for next-gen creative software.

If the current landscape is any indication, approaches will vary. With voice, Anything World is putting the onus not on the technical know-how, but on the clarity of the instructions. This lowers the barriers to creation, enabling even beginners to design in 3D. Meanwhile, Promethean’s visual browser focuses on automating semantic connections across eclectic content libraries. This lets 3D creatives produce art without having to navigate obscure folder architectures and metadata.

Source: Promethean AI via 80LV

We are excited to see companies continue to push creativity forward with faster, more intuitive, and more efficient interfaces.

Balancing control and transparency

Although we see intuitiveness as key to breaking into the mainstream, AI companies shouldn’t turn into black boxes just to reduce friction. In fact, we think they should aim to be as transparent as possible about their inner workings, for two reasons: trust and creative control.

Trust, we expect, will be a prerequisite for any company dabbling with AI-enabled creativity. After all, professionals may be understandably wary of any technology that they think could, ultimately, make them obsolete. Specialized tools should look to augment creators, not replace them, and make those intentions clear if they want to attract customers in the first place.

Then comes creative control. On that front, advanced users are likely to want to get deeper into the nuts and bolts of the technologies they use on a day-to-day basis so they can customize the output of their AI to fit their specific needs.

Accordingly, and while automation is desirable, we think AI-enabled content creation should maintain human-in-the-loop capabilities, whereby a human operator can certify, and potentially course-correct, the adequacy of an AI’s work — for example, Promethean’s users can override a semantic connection between two assets if they consider it irrelevant. In turn, empowering users with more granular oversight, rather than one-size-fits-all output, should lead to greater trust, too.
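The human-in-the-loop pattern described above can be sketched in a few lines. The class and method names below are hypothetical, not Promethean AI’s actual API; the point is only that a human veto persists alongside the model’s suggestions rather than being overwritten by them:

```python
class AssetSuggester:
    """Toy semantic suggester with persistent human-in-the-loop overrides."""

    def __init__(self):
        # Scene tag -> assets the model considers semantically related
        # (invented example data, standing in for a learned model).
        self.links = {"kids_room": ["teddy_bear", "toy_box", "office_chair"]}
        self.vetoed: set[tuple[str, str]] = set()

    def suggest(self, scene: str) -> list[str]:
        """Return the model's suggestions, minus anything a human rejected."""
        return [a for a in self.links.get(scene, [])
                if (scene, a) not in self.vetoed]

    def override(self, scene: str, asset: str) -> None:
        """A human artist rejects an irrelevant connection; the veto persists."""
        self.vetoed.add((scene, asset))

suggester = AssetSuggester()
suggester.override("kids_room", "office_chair")  # artist: not for a kid's room
assert suggester.suggest("kids_room") == ["teddy_bear", "toy_box"]
```

In a real tool, those vetoes could also feed back into training, so the model gradually absorbs each studio’s taste.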

AI as a humanization tool

Every gamer has memories of a particular NPC that visibly lacked even the most basic social skills, movement coordination, or spatial awareness. Over the years, these moments have not only made for valuable memes but also served as cautionary tales for developers: any sign of unnatural behavior should be avoided at all costs in the characters they design.

Fortunately, such shortcomings are growing scarcer every year. Recent developments in machine learning are enabling storytellers to generate increasingly convincing character behaviors, bringing new depth to how we interact with them inside our favorite virtual worlds. Now, AI-first companies aim to redefine what virtual beings can be.

Why AI-enabled characters are coming to the forefront

As mentioned, the dream for smart, or smarter, characters isn’t exactly new.

Developers and players alike have consistently done their best to avoid crossing the infamous “uncanny valley” — a hypothesized relationship between the human-like appearance of an object and our emotional response to it. Stemming from robotics, this concept describes the sense of unease or revulsion some observers can feel when facing humanoid robots that don’t feel quite human. It was then quickly extended to gaming, where it still acts as a guiding principle for designing and assessing character animation, dialogue, and behavior.

Source: Wikipedia

That developers have historically aimed for more realism makes sense. Lifeless interactions not only hurt immersion, but indirectly impact the game experience: if you know the characters around you have only a few canned replies to offer as conversation, why would you spend time engaging with them at all? Lackluster NPCs therefore lessen the player’s desire to explore and experiment, to the detriment of play.

In recent months, AI-enabled characters have seen renewed interest across the industry. Technical feasibility aside, we believe a number of factors are contributing to this trend.

Recurrence: Long and frequent play sessions in persistent worlds mean you’re likely to encounter some characters repeatedly. To make sure these interactions remain entertaining, informative, or both on every repeat visit, developers are looking to move beyond static dialogue trees and toward more adaptive conversational and behavioral systems.

Scalability: As virtual worlds get bigger and more complex, they are being filled with hundreds to thousands of NPCs. While not all those characters will have the same significance, it’s important that they all offer players at least an acceptable level of interaction — call it an MVI, or Minimum Viable Interaction. While scripting so many dialogues and personalities manually would be impractical, AI can help scale up that process.

Personalization: With ubiquitous recommendation algorithms, consumer apps from Netflix to Spotify provide a constantly upgraded consumption experience. As we continue to spend more time inside virtual worlds, these expectations will likely apply to their content and narratives, too. Developers looking to improve engagement will find value in AI-driven personalization, from dynamically balancing game difficulty to adapting a companion NPC’s playstyle to the player’s.

Consistency: While the success of “low-fi” games and platforms like Minecraft and Roblox can’t be overstated, the overall trend with each new generation of gaming hardware is one of increased photorealism across all dimensions, including graphics, physics, and lighting. With the visual aspect of games getting more and more lifelike, it’s only logical that characters would act accordingly.

It’s notable that this list features both player-centric (recurrence, personalization, consistency) and developer-centric drivers (scalability). Together, they’re fostering AI adoption and leading to exciting new use cases for interactive characters.
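The dynamic difficulty balancing mentioned under Personalization can be as simple as a feedback loop that nudges a difficulty parameter toward a target failure rate. A minimal sketch, with hypothetical parameter values rather than any shipped game’s tuning:

```python
def adjust_difficulty(current: float, recent_deaths: int,
                      target_deaths: int = 2, step: float = 0.1) -> float:
    """Nudge a [0, 1] difficulty knob toward a target failure rate."""
    if recent_deaths > target_deaths:
        current -= step   # player is struggling: ease off
    elif recent_deaths < target_deaths:
        current += step   # player is cruising: push harder
    return max(0.0, min(1.0, current))  # clamp to the valid range

# Run once per play session; the knob drifts toward the player's skill level.
difficulty = 0.5
difficulty = adjust_difficulty(difficulty, recent_deaths=5)  # eases off
```

Production systems track richer signals than a death count (accuracy, time-to-clear, resource use), but the core loop is the same feedback principle.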

Beyond smarts: personalities

While the focus to date has been on behavioral intelligence, the scope and potential of character realism have dramatically expanded: AI-first companies now aim to make their characters not just smarter, but more human too, with memories, personalities, and feelings of their own. This opens up opportunities for both AI infrastructure providers and content creators, with one enabling more of the other.

A good example is Inworld AI, a developer platform for creating AI-powered virtual characters to populate immersive realities including games and VR/AR — as of last week, Inworld AI is a BITKRAFT portfolio company. The company’s technology covers all dimensions of an AI’s capabilities, including its perception, cognition, and behavior. This enables the resulting characters to fulfill a range of functions to “advise, guide, support, [and] entertain” users for consumer and enterprise use cases alike.

Source: Inworld AI

As companies like Inworld AI start to handle the heavy lifting of designing and training AI characters, more creators will be able to leverage these technologies to create complex stories at scale. For example, beingAI is developing what it calls a “family” of AI characters all interconnected in Zbee World, an original transmedia story franchise.

Importantly, AI-driven creative companies can take the long-term view, iterating on their protagonists based on audience feedback or serendipitous findings. Fable Studio’s Wizard Engine lets users generate an AI character’s life content on an ongoing basis, using a backstory and synopsis as initial input; from there, it’s able to generate content across voice, animation, text dialogue, and video. With the ability to refine and compound specific character traits over time, storytellers will be encouraged to adopt ever more fluid development workflows.

Beyond interactions: relationships

Yet generating distinct, complex personalities is only one aspect of the bigger opportunity for AI beings. Though it makes for potentially richer interactions, it still relies on a one-to-many model whereby all users are essentially engaging with the same centrally-crafted AI brain.

Instead, developers can now choose to go even more granular and enable their protagonists to adapt to players at the individual level for a fully personalized experience. For example, synthetic media startup Replika allows users to create their own “compassionate AI friend” that’s “always here to listen and talk.” As of October 2021, its app had 10 million registered users sending more than 100 million messages each week — each of them interacting with a truly unique companion.

With time, we anticipate that consumers will begin to more closely bond with a select few virtual beings and expect them to travel, or “live,” across media, devices, and platforms. To prepare for this future, developers may want to consider as many sources of input as possible. For example, leveraging specific AI building blocks enables Replika companions to respond to text, using both retrieval-based and generative dialogue models; voice, through speech recognition and synthesis; and vision, with face and person recognition and what’s called Visual Question Generation. By functioning equally in all contexts, AI companions will be able to prolong user interaction, and to become even more empathetic and customized as a result.

Source: Replika
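A retrieval-based dialogue model, the simpler half of the text stack mentioned above, just ranks a library of canned replies by similarity to the incoming message. A deliberately naive Python sketch (Replika’s production systems are neural and far more capable; the prompts and replies below are invented):

```python
from difflib import SequenceMatcher

# A tiny prompt-to-reply library standing in for a learned response index.
RESPONSES = {
    "how are you": "I'm doing well, thanks for asking! How about you?",
    "tell me a joke": "Why did the NPC cross the road? It was scripted to.",
    "good night": "Sleep well! I'll be here tomorrow.",
}

def retrieve_reply(message: str) -> str:
    """Return the reply whose stored prompt best matches the message."""
    def score(prompt: str) -> float:
        # String similarity as a crude stand-in for semantic similarity.
        return SequenceMatcher(None, message.lower(), prompt).ratio()
    best_prompt = max(RESPONSES, key=score)
    return RESPONSES[best_prompt]

assert retrieve_reply("hey, how are you?") == RESPONSES["how are you"]
```

Generative models complement this by composing novel replies when nothing in the library fits, which is what makes each companion’s conversation history genuinely unique.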

Our take

We at BITKRAFT believe the future of AI-enabled worldbuilding is rich with opportunities.

On the one hand, AI creation tools are ushering in a new era of creativity for the studios forward-looking enough to harness, rather than fear, them. With players’ appetite and expectations for virtual worlds only growing, we expect demand for this kind of software to surge in the coming years. Rather than compete solely on technology, the companies pioneering the space today should aim to build with consumers in mind from inception. On that front, we see intuitive interfaces, transparency, and granular control over the output as clear differentiators.

On the other hand, recent breakthroughs in machine learning mean AI can be leveraged to make characters, and ultimately entire worlds, more intelligent and empathetic. As virtual beings become imbued with memories, personalities, and even feelings, the places they inhabit are set to grow ever richer and more complex. Rather than just “interact” with these AIs, consumers will inevitably form more durable and intimate relationships with them.

We are excited to see AI-native companies continue to push back the frontiers of interactive entertainment. If you’re building in this space, we would love to hear from you!

Disclosures:

The mention of any companies in this website post is for information purposes only and does not constitute an offer to sell or a solicitation of an offer to buy any interests in any of the companies listed, or any other securities.

The information contained herein may include, or be based in part on, articles, information, and other data supplied by third parties, which has not been verified by BITKRAFT. This information should not be relied upon for the purpose of investing in any of these companies or for any other purpose. Past investment results or performance of any sectors, industries, and/or companies listed should not be viewed as indicative of future performance.
