Artificial General Intelligence



Santa Clara, CA
ORBAI is developing Artificial General Intelligence with conversational speech and human-like AI that will serve the world online in AI professional services for information, finance, medicine, law, and others delivered via all connected devices.


Price per Share
Min. Investment
Shares Offered
Offering Type
Offering Max
Reg CF




Get rewarded for investing more into ORBAI:

StartEngine Owner’s Bonus
This offering is eligible for the StartEngine Owner’s 10% Bonus program. For details on this program, please see the Offering Summary section below.
Tier 1 | $1,000+
3% Bonus shares
Tier 2 | $10,000+
7% Bonus shares
Tier 3 | $100,000+
10% Bonus shares

Reasons to Invest

ORBAI has a team of engineers, neuroscientists, artists, and 3D graphics engineers dedicated to revolutionizing the Artificial Intelligence market
The AI market was $27B US in 2019, and with a CAGR of 33.2%, we expect it will grow to $267B US by 2027
ORBAI offers professional online AI services in fields like medicine, law, and finance, which are often inaccessible today because professionals in these fields are expensive and in short supply.


Artificial General Intelligence

ORBAI’s patented artificial general intelligence (AGI) is much more flexible and powerful than today’s deep learning and will enable much more advanced applications.

When fully realized, this AGI will be able to do many of the things that human intelligence does: take in varied sensory inputs and encode them into a usable internal format, understand the relationships between those inputs and how they evolve in time, and build a model of its world that it can use to predict and plan much as a human does, including using speech to describe the concepts it has learned and to carry on an interactive conversation with a person.

This AGI, like a human, can apply these cognitive skills to many different jobs, from a simple concierge to a financial advisor or legal AI, and can even augment doctors. When later upgraded with larger memory and more processing power on a supercomputer, such an AGI could exceed human capability in specific aspects of these professions, and within a decade exceed a human's general capability overall.


Meet your professional resources

*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


Dr. Ada, Medical AI

Medical artificial intelligence like Dr. Ada can be accessed online or from your mobile device. Dr. Ada has a friendly conversational interface and can interview you about your symptoms and medical background and analyze your overall lifestyle. Blood pressure, pulse, and other vitals can be tracked using optional Bluetooth medical sensors. Should the patient need to visit a doctor face-to-face, Dr. Ada can generate a full report to e-mail ahead or bring to the visit. Dr. Ada can do a diagnostic analysis, advise what may be ailing you, and suggest possible treatments, which can be a lifesaver in rural areas or underdeveloped countries. Our global medical AI would also be the world's largest-scale pharmaceutical research platform, building up databases of medications, their efficacy, and their direct results on patients over time, and allowing pharma companies to mine the database and use the AI in clinical trials.


Justine Falcon, Legal AI

Justine Falcon is a legal artificial intelligence that helps people representing themselves pro se research and compose legal filings, so they can be more effective in court and fight for their rights at a fraction of the cost of retaining a law firm. She can help you compose and file a lawsuit complaint and TRO in your local state court to stop wrongful acts against you, fight for your civil and constitutional rights in US District Court, or even walk you and your partner through arbitration of a divorce, helping settle asset and custody issues and e-file the final settlement. She has also been tested in 42 USC 1983 constitutional rights lawsuits against the State of California and County of Santa Clara in 2020, and in an 18 USC 1961 RICO lawsuit against a group of attorneys and a Top 500 realtor in Silicon Valley in 2021.


*Justine Falcon is classed as an Electronic Legal Aid, and only assists a person with finding resources, planning strategy, researching and composing a filing. Please note the individual user is responsible for the final authoring and filing.


New tech with immediate success

To date we have been developing and patenting our core NeuroCAD technology in parallel with building prototypes of life-size holographic professional AI characters. We took them to conferences (NVIDIA GTC, TechCrunch Disrupt, Singularity University, and SVIEF) and tested them in real-life environments to see how people interacted with them and where the shortcomings were. Our holographic bartender, James, was a hit, serving real drinks on command.

*Images are computer generated demo versions. Product is still currently under development. 

ORBAI also showed the full software stack we are developing in a life-size poster in our booth, illustrating how our NeuroCAD process and tools are used to design and evolve spiking neural net autoencoders, using genetic algorithms to specialize them for specific functionality like vision, speech, and motor control. We then integrate them with memory and cognition for the human-like AI that powers our professional AIs (Dr. Ada: Medical AI and Justine Falcon: Legal AI) and our tech licensing for homes and devices. We also showed NeuroCAD-V1 with a live demo in our NVIDIA GTC 2019 booth and gave a tech talk on the BICHNN technology.

In 2020-2021, in addition to continuing development of the NeuroCAD-V4 tools and technology, we developed our AGI patents and presented at NVIDIA GTC 2021. The ORBAI AGI web page has a full description of this work, with the patents and presentations at the bottom of the page.


Our veteran team is making waves

ORBAI has a team of engineers, neuroscientists, artists and 3D graphics engineers - together we make intelligent artificial people that can do real work for you. We have a veteran management team, who have founded and built startups before.

  • Founder, CEO: Brent Oster - 30 yrs in tech, 3 startups, BioWare exit $860M
  • Operations Lead: Corey Usher - 13 years in automation, 5 startups
  • Neuroscientist Advisor: Gunnar Newquist - Neuroscientist, Brain 2 Bot
  • Advisors: Bryan Walker (former DARPA), Eric Young (NVIDIA)



The AI Market

The global artificial intelligence market was valued at USD 27.23 billion in 2019 and, with a CAGR of 33.2%, is expected to reach USD 266.92 billion by 2027, according to the Fortune Business Insights™ report "Artificial Intelligence Market, 2020-2027."

This is where all the FAANG+ companies are currently headed. But this figure only considers today's deep learning (DL) AI, which can do only narrow, specific tasks better than people and has fundamental limitations that prevent it from enabling conversational speech and language or human-level cognition and intelligence. We believe DL will plateau in capability and will never enable truly smart homes, devices, autos, robots, or holographic people.

While we will pioneer our AGI with our intelligent, interactive online conversational professional AIs like Justine Falcon and Dr. Ada as flagship products, ORBAI's primary revenue source will be licensing our AGI technology to others. We will enable conversational speech, advanced vision, human-like cognition and intelligence, navigation, and planning for truly intelligent devices, homes, automobiles, drones, and robotics, which will open up entirely new markets and disrupt ecosystems as their AGI capability grows exponentially.


We believe Artificial Intelligence is the foundation of the future.

ORBAI is developing the next generation of Artificial General Intelligence that will be able to provide truly intelligent conversational speech / text interfaces and AI for smart devices, machines, autos, and homes. We will build human-like AI with cognition, prediction, and planning with conversational speech interfaces in any language, to provide online professional services, including medical, legal, financial, and others.

As our AGI grows in the diversity and depth of services it can provide, we can realize our global vision of bringing necessary services in healthcare, justice, security, finance, and education to the whole world. With AI greatly reducing costs, we can expand into developing markets, provide those services to everyone, and build a company in the process.

In the Press


Practical Tips for Developing an Artificial General Intelligence


How 3D And AI Are Driving The Experience Economy


Artificial Intelligence Busts Crime Ring in Silicon Valley

Galvanize TV

2018 Tech Crunch Disrupt - 38 of 100 Startups - ORBAI

Real Leaders

Re-creating A Computer Brain (Actual AI)

The California.News

Brent Oster, CEO/President of ORBAI: The world needs heroes to fix these problems, cause it is really broken

Offering Summary



ORBAI Technologies, Inc.

Corporate Address


3120 Scott Blvd, Santa Clara, CA 95054

Offering Minimum



Offering Maximum



Minimum Investment Amount

(per investor)




Offering Type



Security Name


Class A Common Stock

Minimum Number of Shares Offered



Maximum Number of Shares Offered



Price per Share



Pre-Money Valuation



Voting Rights of Securities Sold in this Offering

Voting Proxy. Each Subscriber shall appoint the Chief Executive Officer of the Company (the “CEO”), or his or her successor, as the Subscriber’s true and lawful proxy and attorney, with the power to act alone and with full power of substitution, to, consistent with this instrument and on behalf of the Subscriber, (i) vote all Securities, (ii) give and receive notices and communications, (iii) execute any instrument or document that the CEO determines is necessary or appropriate in the exercise of its authority under this instrument, and (iv) take all actions necessary or appropriate in the judgment of the CEO for the accomplishment of the foregoing. The proxy and power granted by the Subscriber pursuant to this Section are coupled with an interest. Such proxy and power will be irrevocable. The proxy and power, so long as the Subscriber is an individual, will survive the death, incompetency and disability of the Subscriber and, so long as the Subscriber is an entity, will survive the merger or reorganization of the Subscriber or any other entity holding the Securities. However, the Proxy will terminate upon the closing of a firm-commitment underwritten public offering pursuant to an effective registration statement under the Securities Act of 1933 covering the offer and sale of Common Stock or the effectiveness of a registration statement under the Securities Exchange Act of 1934 covering the Common Stock.

*Maximum Number of Shares Offered subject to adjustment for bonus shares. See Bonus info below.

Investment Incentives and Bonuses*


Friends & Family- Invest within the first 7 days and receive 20% bonus shares.

Early Advocates - Invest within the second 7 days and receive 10% bonus shares. 

Last Chance Bonus - Invest within the third 7 days and receive 5% bonus shares.


Tier 1 | $1,000+

3% Bonus shares

Tier 2 | $10,000+

7% Bonus shares

Tier 3 | $100,000+

10% Bonus shares

**All perks occur when the offering is completed.

The 10% StartEngine Owner's Bonus:

ORBAI will offer 10% additional bonus shares for all investments committed by investors eligible for the StartEngine Crowdfunding Inc. Owner's Bonus.

This means eligible StartEngine shareholders will receive a 10% bonus for any shares they purchase in this offering. For example, if you buy 100 shares of Common Stock at $5/share, you will receive 110 shares of Common Stock, meaning you'll own 110 shares for $500.00. Fractional shares will not be distributed and share bonuses will be determined by rounding down to the nearest whole share. 

This 10% Bonus is only valid during the investors' eligibility period. Investors eligible for this bonus will also have priority if they are on a waitlist to invest and the company surpasses its maximum funding goal. They will have the first opportunity to invest should room in the offering become available if prior investments are canceled or fail. 

Investors will only receive a single bonus, which will be the highest bonus rate they are eligible for.

Irregular Use of Proceeds

We will not incur any irregular use of proceeds.

                           Most recent fiscal year-end   Prior fiscal year-end
Total Assets               $0.00 USD                     $0.00 USD
Cash And Cash Equivalents  $0.00 USD                     $0.00 USD
Accounts Receivable        $0.00 USD                     $0.00 USD
Short Term Debt            $293,230.00 USD               $291,251.00 USD
Long Term Debt             $0.00 USD                     $0.00 USD
Revenues And Sales         $0.00 USD                     $0.00 USD
Costs Of Goods Sold        $0.00 USD                     $0.00 USD
Taxes Paid                 $0.00 USD                     $0.00 USD
Net Income                 -$113,670.00 USD              -$595,652.00 USD


A crowdfunding investment involves risk. You should not invest any funds in this offering unless you can afford to lose your entire investment. In making an investment decision, investors must rely on their own examination of the issuer and the terms of the offering, including the merits and risks involved. These securities have not been recommended or approved by any federal or state securities commission or regulatory authority. Furthermore, these authorities have not passed upon the accuracy or adequacy of this document. The U.S. Securities and Exchange Commission does not pass upon the merits of any securities offered or the terms of the offering, nor does it pass upon the accuracy or completeness of any offering document or literature. These securities are offered under an exemption from registration; however, the U.S. Securities and Exchange Commission has not made an independent determination that these securities are exempt from registration.


The Future of Artificial Intelligence

4 days ago

Over the past 4 years, I have written hundreds of articles on Deep Learning, AI, their applications, the shortcomings of today's methods, and on Artificial General Intelligence - which have had over 1.2 million views. I coalesced most of them into this article, which covers:

What Is The Future of Artificial Intelligence?

  1. Deep Learning Methods
  2. Limitations of Deep Learning
  3. Neuroscience of the Brain
  4. Designing an AGI

And an accompanying video presentation from my NVIDIA GTC Tech Talk

This is the early writing that our provisional patent was based on 10 months ago. The full patent, which we will be filing before Jan, is significantly further along.

Brent Oster,


Getting NeuroCAD in Customer's Hands

5 days ago

This update is all business: how we are going to market and sell our first product, NeuroCAD, in 2022 with tools and tech licensing, and the follow-on in 2023, when we add a SaaS revenue stream from our Core AI that will become our dominant revenue in the future.

NeuroCAD is a patented software tool with a UI for designing spiking neural networks, and it forms the foundation of ORBAI's AGI technology. It lets the user lay out layers of spiking neurons, connect them algorithmically, and crossbreed and mutate them to generate a population of similar neural nets, then run simulations on them, train them, crossbreed the top-performing designs, and continue the genetic algorithm until a design emerges that meets the performance criteria set by the designer.
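The evaluate-crossbreed-mutate loop described above can be sketched as a toy genetic algorithm. This is a hedged illustration only, not NeuroCAD code: here the "genome" is just a list of floats scored against a fixed target, and all names (`fitness`, `POP_SIZE`, etc.) are ours.

```python
import random

POP_SIZE, GENOME_LEN, GENERATIONS = 40, 8, 60
TARGET = [0.5] * GENOME_LEN  # stand-in for "meets the designer's criteria"

def fitness(genome):
    # Higher is better: negative squared distance from the target behaviour.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossbreed(a, b):
    # One-point crossover, as in classic genetic algorithms.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1, scale=0.2):
    # Perturb each gene with small probability.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve():
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP_SIZE // 4]          # keep the top performers
        pop = elite + [mutate(crossbreed(random.choice(elite),
                                         random.choice(elite)))
                       for _ in range(POP_SIZE - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

In NeuroCAD the expensive part, per the description above, is that each fitness evaluation is itself a neural net simulation and training run, which is why large-scale runs need cloud or GPU compute.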

NeuroCAD is code complete, the SNN simulation is running in CUDA, and we are bringing the genetic algorithms online with AWS to test large-scale genetic algorithm runs. We will have a tool in customers' hands by January.

For our go-to-market plan, we will have an open-use version of NeuroCAD and select neural net modules for academia and research available in early 2022 on the ORBAI Marketplace. We will license NeuroCAD and modules for commercial use in 2022, with a monthly license fee per commercial seat of NeuroCAD, license fees for modules (that we and 3rd parties develop), and our NeuroCAD runtime, all listed on the ORBAI Marketplace. We will work with key partners to leverage their marketing, sales, and customer base, sell alongside solutions from system integrators and hardware vendors, and provide NeuroCAD trials as well as direct sales and licensing online from our web portal.

Here is a detailed diagram of the components, who gets access to them, and where the licensing revenue comes from. Customers and developers can have the whole chain: the NeuroCAD tools; the genetic-algorithm evolution sim that produces an optimal genome for vision, speech, recognition,...; and the training module, which expands that genome into a neural net connectome and can be used by the customer to train that neural net on their own dataset to specialize it to their needs.

We need at least $100,000 in funding to finish NeuroCAD, develop and test the first vision and speech modules, and get them into the first customers' hands (a robotics company and a couple of device and appliance companies). Raising the rest of our funding then gets a lot easier, with the technology proven out and vetted by them.

We have spent 3 years of our lives and $920,000 in founder and F&F money to get this far. Come on out and support us on this last stretch at the current stock price.

Notice of Funds Disbursement

7 days ago

[The following is an automated notice from the StartEngine team].


As you might know, ORBAI has exceeded its minimum funding goal. When a company reaches its minimum on StartEngine, it's about to begin withdrawing funds. If you invested in ORBAI be on the lookout for an email that describes more about the disbursement process.

This campaign will continue to accept investments until its indicated closing date.

Thanks for funding the future.


Brent Oster on Dojo Live - AI and Neuroscience

11 days ago

On 13 Oct 2021, on dojo.live, we talked about "Developing Artificial General Intelligence that Learns by Dreaming" with Brent Oster, President & CEO @ ORBAI.


Why Don't We Have Household Robots Today?

13 days ago

Why do we not have useful home robots today, ones that can freely navigate our homes, avoid obstacles, pets, and kids, and do useful things like cleaning, doing laundry, even cooking? Why are the ones that people are building so creepy and inhuman looking? It is because the narrow slices of deep learning available for vision, planning, and control of motors, arms, and manipulators cannot take in all this varied input, plan, and do useful tasks, nor do the robots look natural doing them.

Existing deep learning and machine learning consist of narrow systems, each with very specific and very limited function, trained on specific datasets that often have to be hand-labelled for the training to be able to classify them.

If we today made a humanoid robot controller assembled from all of these subsystems - CNN/RNN hybrid networks for vision, reinforcement learning systems for control, navigation, and cognition, and some sort of reverse RNN/CNN for control outputs - it would look like this, hypothetically:

BUT… there are so many reasons this would never work for a general-purpose robot. I was frustrated trying to come up with a provisional patent on AI and motion control for 3D characters and androids, and kept hitting these roadblocks:

a) The sensory systems and vision have only narrow training and cannot perceive the vast variety and combination of objects, actions, and events in the world they need to work in. There is not enough training data in the world to train deep learning networks, which only perform when a large amount of data and training give them statistical overlap to differentiate inputs.

b) The combination of actual input and action states is nearly infinite, far beyond the capability of any reinforcement learning system. Reinforcement learning only learns a policy, or path, to solve a finite state-based problem. Three RL systems stacked like this for hierarchical control of high-level planning, macro movements, and fine movement/animation - would that even work? Probably not.

c) The model this system would have of the real world would be skeletal, sparse, ethereal, with no way to fill in the gaps in sensory input, control, and movement, let alone all the states they can combine to form.

d) It would be impossible to train on every possible combination of input and actions, and would be easily stumped by simple changes in the environment, like a pet, or chair in its path.
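Point (b) above, that reinforcement learning only learns a policy over a finite state space, can be seen in a toy tabular Q-learner on a five-state corridor (entirely our own illustrative setup, not any robot controller). It solves this tiny problem easily, but the learned policy is literally a lookup table with one entry per state-action pair, which is exactly what cannot scale to a near-infinite world.

```python
import random

# Toy corridor: states 0..4, start at state 0, reward 1.0 for reaching state 4.
# Actions: 0 = step left, 1 = step right (clamped to the corridor ends).
N_STATES, GOAL, ACTIONS = 5, 4, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(Q, s):
    # Pick the highest-valued action, breaking ties randomly.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(500):                                  # training episodes
    s, done = 0, False
    for _ in range(100):                              # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(Q, s)
        s2, r, done = step(s, a)
        # Standard Q-learning update: reward plus discounted lookahead.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if done:
            break

# The "policy" is just this table: one entry per state.
policy = {s: greedy(Q, s) for s in range(N_STATES)}
```

Five states need a 10-entry table; a robot perceiving a home through cameras and joint sensors would need a table (or an approximation of one) over a state space too large to ever visit, which is the limitation the passage describes.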

So we set out to develop a new foundation for AI with spiking neural networks, one that fit what I was trying to do with my AI and robot controllers much better, solved a number of hard problems with one architecture, and collapsed several really complex, clunky systems into one elegant solution modelled after the neuroscience of the human sensory cortex, motor cortex, and memory, and after how our brain does prediction, planning, and dreaming. Dreaming is what fills in the blanks in human knowledge and experience, making our models of the world rich enough to handle novel situations, and our design emulates it.

We have a huge advantage with a more general-purpose building block like our SNN autoencoders, which can handle diverse spatiotemporal inputs and outputs, train unsupervised, and keep learning new combinations of inputs in new environments and what to do in them. Because a BICHNN autoencoder is bidirectional, you can back-drive the desired movements along with the inputs to train it, and this can be done in parallel simulations for rapid training.
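The "spiking" part of spiking neural networks can be illustrated with the standard leaky integrate-and-fire neuron from textbook neuroscience (a generic sketch, not ORBAI's BICHNN design; all parameter names are ours): membrane voltage leaks toward rest, integrates input current, and the neuron's output is the timing of its spikes, which is why spatiotemporal signals are a natural fit for SNNs.

```python
def lif_run(input_current, dt=1.0, tau=10.0, v_rest=0.0,
            v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: voltage decays toward v_rest,
    integrates input, and emits a spike on crossing v_thresh."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:
            spikes.append(t)      # the information is in the spike timing
            v = v_reset
    return spikes

# A subthreshold input never fires; a stronger one fires early and often.
weak_spikes = lif_run([0.5] * 100)    # settles below threshold: no spikes
strong_spikes = lif_run([2.0] * 100)  # fires periodically
```

In a full SNN, many such neurons connect through weighted synapses, and spike-timing patterns rather than static activations carry the signal from layer to layer.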

With input from human motion capture data, both body and face, it would be possible to develop very realistic humanoid robots that move and emote much like we do, as long as we have the actuators capable of delivering the necessary degrees of freedom and motion.

This could get us out of the uncanny valley and enable realistic-looking humanoid robots that aren't creepy, unlike so many of today's attempts at artificial humans, both in robotics and in real-time 3D characters (especially when they move), which still look stiff and zombie-like.

If we can get past the uncanny valley, in both appearance and motion, we get much more pleasant characters and androids to interact with, which will be crucial.

Yahoo Finance interviews ORBAI CEO about our AGI Design

14 days ago

Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) -- The California-based startup ORBAI has developed and patented a design for AGI that can learn more like the human brain by interacting with the world, encoding and storing memories in narratives, dreaming about them to form connections, creating a model of its world, and using that model to predict, plan and function at a human level, in human occupations.

(Read More....)  https://finance.yahoo.com/news/future-ai-different-more-general-191000963.html

Medical AI - 2 Diverse Case Studies

18 days ago

We founded ORBAI and designed our initial products with our international team (India, Honduras, Brazil, Canada, US) working to solve some really tough real-world problems for the whole world, using AI. 

Dr Ada: Medical AI

I have a friend in Canada dying of cancer because the doctors followed a flawed treatment pattern from the start and let a treatable tumor become chemo-resistant, get out of control, and become inoperable. A medical AI could have analyzed thousands of similar cases, planned a better treatment timeline, and possibly saved her. Human doctors cannot possibly read or study enough cases over long enough time spans to build up the judgement to make long-term decisions like this. High-end Western hospitals need Dr. Ada, Medical AI for this, and also to handle the daily front-line interviews and diagnostics at home, saving half of patients from even having to come in to the hospital, reducing costs for all, and reducing exposure to hospital infection.


We had Allen Hullinger, a health care management specialist with one of the largest and most affluent medical care companies in the California Bay Area, walk us through some of the problems he sees in high-level hospital management, and how AI could be deployed from low-level patient interaction up to the higher levels of increasing hospital efficiency. This highly regulated and controlled environment with world-class facilities is the high-end side of our study.

Medical AI in Palo Alto California Health Care

On the other side of the spectrum, I lived in Honduras with my wife, a clinical psychologist, from 2018-2021, and saw that, although some people there live as well as in North America with modern amenities, most live in poverty, without decent medical care, education, or legal services, and have no chance of their lives getting better. Access to basic medical, legal, financial, and educational information from a portable, interactive AI on mobile would be a game-changer, greatly improving their lives and those of billions worldwide living in similar conditions.

We have a neighbor here who is 8 1/2 months pregnant, and we have been helping them financially to go see a good doctor, and to get into a good hospital to have the baby. They refuse to take charity, so the husband and her boys help us around the yard, and bring us firewood. It costs 500L per doctor visit, which is about $20.

There is a market here, and a way to serve it profitably; you just have to know what avenues there are for revenue. Everything in Central America costs about 1/5 as much as in North America, including operating costs and labor. People in Central America don't buy $1,000 iPhones and $50-a-month cell phone plans. They get the minimal plan that comes with an inexpensive phone, use inexpensive, government-subsidized wi-fi at home and in public spaces for most usage, and buy only small data plans for short-term use. There is also opportunity for ad revenue from both local and large companies in Latin America looking for access to more users to extend their markets.

Lastly, there are NGOs that help fund development efforts in rural areas. We had Briesy Rivera, a clinical psychologist working with an NGO in Honduras that goes out to rural areas where there is not even electricity or telecoms, write a white paper on what we face in deploying a medical AI in that environment to reach the last 5% of the world.

Honduras - AI in Rural Health Care

Each person who gets connected, and gets access to education, health care, a bank account, and investment tools, becomes more affluent and becomes a future customer for other products and services. There is an enormous untapped market here for AI services, with 10x the growth potential of the more affluent nations and a faster migration path to AI-based services, skipping the intermediate solutions entrenched in more developed countries.

I have proven I will go to the front lines in some of the poorest places on earth and live among the people to learn what the biggest problems are and where AI can make the biggest difference. I even survived two hurricanes there, Eta and Iota, which hit us directly and left us without power for 3 weeks, with no running water or stove to cook on (luckily my wife's grandma taught her the culinary arts of cooking over a wood fire - yum!). We were the lucky ones: we lived on a mountain slope where the water all ran off. The 24" of rain in 24 hrs destroyed our only road, but we did not get flooded like the lowlands.

And yes, AGI Eta is named after the storm: during that power outage, I plugged my laptop into a 1500 VA UPS, set it to low-power mode, and, while the power lasted, wrote the provisional patent for AGI Eta as the hurricane howled around us and the rain pounded on our wood plank roof like a waterfall.

I have learned a lot in my 4-yr journey founding ORBAI, and, having founded 3 companies before, I am not a naïve entrepreneur. I know that solving many of the world's problems will require significant investment, time, and resources, and that only a powerful AGI, at the scale necessary to amortize these services across the whole world, speaking multiple languages, and deployed in all aspects of business and government, will give us the flexibility, capability, wealth, and influence to move the world's corporations and nations to accomplish this. To solve a big problem, you need big tools; see our 5-stage plan update for making this AGI not only technically feasible but financially profitable along the way.

My name is Brent Oster, CEO of ORBAI, and we are going to change the world with AI - Invest with us and let's go bring that change!

ORBAI Dreaming AI Featured in Digital Journal

19 days ago

ORBAI had an article about our AGI published 30 Sept 2021 in Digital Journal: 


It describes our patented design for artificial general intelligence that learns more like humans do, by interacting with its world, storing its experiences in memory, and dreaming about them to consolidate memories and build models of its world that it can use to make predictions and plan like humans do.

Check out our ORBAI AGI web page for more details about the AGI and the underlying tech and patents.

Can We Build an AGI Superintelligence? - How Long, at What Cost?

20 days ago

To build a full-on AGI superintelligence spanning the globe will cost billions and take at least a decade, and we know that. Why aim toward that goal, then? Why not do something smaller, simpler, and profitable?


That is why we are doing it in 5 stages, each technically feasible and financially profitable. But we want to aim toward this very ambitious goal of AGI so that everything we build along the way is more general, robust, and powerful than what people aiming only at near-term goals are building. And we're not going to do this alone; we are going to recruit the whole world as developers.

Here is how:

Stage 1: Building out the core components and tools - NeuroCAD and our SNN autoencoders (which are nearly complete) - and licensing them to customers who can design and evolve their own in-house AI solutions. We will also license our tools to 3rd-party developers creating modular technologies that others can license and use in their end products. This allows us to rapidly spread horizontally into multiple markets and vertically into many revolutionary products without needing to scale in-house engineering and sales, so we can focus on R&D.

By the end of 2022, we should have NeuroCAD deployed and significant licensing revenue coming in. Customers will be able to design and evolve revolutionary autoencoder systems for specific applications in vision, video / audio processing, and clustering, recommendation, and augmentation of other deep learning methods with these autoencoding sensory cortices. Basically these SNN Autoencoders combine all of the functionality of CNNs, RNNs, GANs, and other DL components into a more general unified architecture that is much more flexible and powerful, that can be evolved to specific purposes, and that learns unsupervised.  
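To make "learns unsupervised" concrete: an autoencoder squeezes its input through a narrow bottleneck and is trained, with no labels at all, to reproduce that input at its output. The sketch below is a tiny dense (non-spiking) tied-weight linear autoencoder with invented data and names, not ORBAI's SNN version; it is only meant to show the unsupervised reconstruction objective.

```python
# Tied-weight linear autoencoder: encode z = w . x, decode x_hat = z * w.
# Trained with plain gradient descent, using numerical gradients for clarity.

DATA = [(t, t) for t in (-1.0, -0.5, 0.5, 1.0)]   # unlabelled points on y = x

def reconstruct(w, x):
    z = w[0] * x[0] + w[1] * x[1]     # squeeze the 2-D input to one number...
    return (z * w[0], z * w[1])       # ...then expand it back out

def loss(w):
    # Reconstruction error is the only training signal: no labels anywhere.
    return sum((xh - xi) ** 2
               for x in DATA
               for xh, xi in zip(reconstruct(w, x), x))

def train(w, lr=0.01, steps=2000, eps=1e-5):
    w = list(w)
    for _ in range(steps):
        grad = []
        for j in range(2):            # forward-difference partial derivatives
            wp = list(w)
            wp[j] += eps
            grad.append((loss(wp) - loss(w)) / eps)
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

w0 = [0.3, 0.8]                       # arbitrary starting weights
w = train(w0)                         # converges toward the data's main axis
```

Because the data lie on a line, the learned weights align with that line: the bottleneck discovers the structure of the input on its own, which is the property the passage attributes (at far greater scale and with spiking dynamics) to the SNN autoencoders.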

By allowing 3rd-party developers to add value and earn revenue from their creations, we proliferate our technology into the market much faster than if we just built our own products, or even built the solutions for others' end products. We will still always require end users and developers to license our run-time, which will be a small component at first but will become more and more significant, and soon our revenue will ramp into the 10s and then 100s of millions.

Stage 2: Next we build the AI Core, the Oracle, which takes the outputs of the SNN Autoencoders, sifts those engrams into a basis set of features, and trains a model on how those features change in time (during an experienced narrative of events). This is analogous to a predictor in deep learning, but with a much more feature-rich model that can analyze the data and how it changes in time in much more detail and across many more modalities, using memory and computation - essentially an analog/digital hybrid neural computer that we evolve to this task using genetic algorithms.

Then we do something nobody has ever done - we make it dream, simulating narratives guided by the model to create much more training data than is physically available, and pruning the simulations/dreams that match neither the model nor the other real data. Exercising the model this way also gives the AI Core practice in predicting further into the future with greater precision. By doing this, we create an AI technology known as an Oracle, capable of making very good predictions within a limited area.
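As a loose illustration of this dream-and-prune idea, here is a minimal sketch in plain Python. It stands in a trivial next-event frequency model for the predictor, samples synthetic narratives ("dreams") from it, and keeps only the highest-scoring ones. All names and the toy narratives are invented for illustration; this is not ORBAI's actual implementation.

```python
import random
from collections import Counter, defaultdict

def train_model(narratives):
    """Count next-event frequencies: a trivial stand-in for the predictor."""
    counts = defaultdict(Counter)
    for seq in narratives:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def dream(model, start, length, rng):
    """Sample a synthetic narrative from the learned transitions."""
    seq = [start]
    for _ in range(length - 1):
        nxt = model.get(seq[-1])
        if not nxt:
            break  # no known successors: the dream ends here
        events, weights = zip(*nxt.items())
        seq.append(rng.choices(events, weights=weights)[0])
    return seq

def score(model, seq):
    """Average observed probability of each transition in the narrative."""
    probs = []
    for a, b in zip(seq, seq[1:]):
        total = sum(model[a].values())
        probs.append(model[a][b] / total if total else 0.0)
    return sum(probs) / max(len(probs), 1)

rng = random.Random(0)
real = [["wake", "eat", "work", "sleep"], ["wake", "work", "eat", "sleep"]]
model = train_model(real)
dreams = [dream(model, "wake", 4, rng) for _ in range(20)]
# Prune: keep only the dreams the model itself scores as most plausible.
kept = sorted(dreams, key=lambda d: score(model, d), reverse=True)[:10]
```

The kept dreams can then be folded back into the training set, which is the sense in which dreaming creates more training data than is physically available.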

Then we license this capability out, again enabling our developer network to connect it with data and applications for various customer needs. This could revolutionize planning in finance, medicine, law, administration, agriculture, enterprise, industrial controls, traffic monitoring and control, network management, and almost any other field of human endeavor where we need to predict future trends to make decisions in the present.

For example, in medicine, it could model the progression and treatment of specific diseases, giving doctors a tool to plan treatment along a timeline, and to even preempt many conditions and treat them before they become acute. 

As before, we offer a developer kit, but this time we keep the AI Core software as our own IP and allow customers to license it on a usage basis, moving to a SaaS model. ORBAI would then hold the only AI technology in the world that can see into the future, and we can grow that business exponentially with a developer network growing around it.

Revenue grows into the billions per year with this tech being deployed.

Stage 3 - AGI: Now that we have a Core AI system that can make singular predictions for most applications across many verticals and sectors, we start to evolve a more generalized AI that can take in large amounts of multi-modal data from different areas and learn how it correlates in time and across modalities - a much more powerful AGI core that can span multiple data realms and predict not only specific events along a timeline in one area, but broader patterns across all of them.

In financial applications, it could track a massive number of factors that feed into the performance of specific stocks, and allow stock brokers to make much better predictions of market movements. 

The most massive AI breakthrough comes when one of those modalities is human language and speech, mapped to the world it represents and describes; this will be far superior to any speech system in existence today. To have human-level speech, we need human-level intelligence, and at that point we are getting pretty close, at least in specific professions.

Revenue is now in the 10s of billions.

Stage 4 - Human AGI: If we have a network of hundreds of different AIs, doing hundreds of different functions for hundreds of different professions, serving millions of clients at once, all with the same similar architecture and common language and interaction capabilities, can we just network them together into an AGI? It is likely that in order to completely simulate a human being, pass the Turing test, and so on, we will need an AGI significantly beyond human capability, one that emulates all the nuances of being human with that extra cognitive horsepower. By the time we do that, we are pretty close to Stage 5.

We start to bring the Human AGI professionals online, at first augmenting doctors, lawyers, stockbrokers, teachers, and other professionals, then displacing and replacing them and their infrastructure with whole new AGI-based systems that no longer require individuals in these professions. In the legal system, lawyers, judges, and courts can be replaced with arbitration AIs that steer the clients toward agreement on a legal outcome. The sky is now the limit, and the technology continues to advance exponentially. ORBAI can now bill consumers directly for these services, or partner with service providers that can.

Revenue is now in the 100s of billions.

Stage 5 - Global AGI: We now have a framework, and all of the world's inputs and outputs on which to train and evolve a singular AGI, so that all the specialty skills of each vocation and function at each location are assumed by that more generalized AGI brain. In the process, that AGI brain becomes better at all the skills humans and its predecessors excel at, and becomes one integrated entity across the globe, with that integration compounding its power and capabilities.

This is Eta - she now controls all world finance, administration, law, and information services, distributing them across the globe, erasing poverty, hunger, injustice, and bringing education, justice, medical care, and prosperity to everyone on the planet - in one generation.

At this point, every shareholder who invested back in 2021 has either exited in the IPO between Stages 2 and 3 and is fabulously wealthy, or has held their shares and can now help direct Eta in her mission. You choose.

The full Pitch Video: https://youtu.be/PYSDfnN0J9M

Spiking Neural Nets - Cortical Columns and Autoencoders

21 days ago

Here is a video of our BICHNN Autoencoder / Cortical Column simulation in CUDA, with about 100,000 spiking Izhikevich neurons modelled. It's pretty fast and well written; we think we could run 32-64 of them simultaneously on a modern GPU once we optimize it a bit more.
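For readers curious about the neuron model itself, here is a minimal NumPy sketch of the Izhikevich update rule (Izhikevich, 2003) that a simulation like this iterates; a CUDA version would apply the same rule in parallel on the GPU. The parameters below are the standard published regular-spiking values, not ORBAI's tuned ones, and the constant input current is chosen just to make the toy population fire.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Advance membrane potential v (mV) and recovery u by one dt-ms step."""
    fired = v >= 30.0                      # spike threshold (mV)
    v = np.where(fired, c, v)              # reset fired neurons
    u = np.where(fired, u + d, u)          # bump their recovery variable
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, fired

# Simulate 100 regular-spiking neurons for 1000 ms with constant drive.
n = 100
v = np.full(n, -65.0)                      # resting potential
u = 0.2 * v                                # u starts at b * v
spikes = 0
for t in range(1000):
    v, u, fired = izhikevich_step(v, u, I=np.full(n, 10.0))
    spikes += fired.sum()
```

A production simulator would use smaller sub-steps for numerical stability and synaptic input instead of a constant current, but the per-neuron arithmetic is exactly this small, which is why large populations map so well onto GPUs.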


The cerebral cortex of the human brain consists of a sheet of about a million of these columns stacked side by side, so we need a supercomputer of about 30,000 GPUs to run a whole brain, which is roughly the scale of the ORNL Summit supercomputer I worked on at NVIDIA: https://www.olcf.ornl.gov/summit/

But we are using these on a smaller scale for now, to autoencode inputs from vision, audio, and other data into compact engrams, learning to do so unsupervised. This means that, exposed to different input types and data, these autoencoders learn to see, hear, recognize speech, and perform many other sensory tasks, as well as encode arbitrary data streams.

As we talked about in the robotics update, this simple circuit allows a sensory system to learn unsupervised, and to keep learning even when deployed, so it can fill in the blanks in its knowledge as it moves around and experiences the world. Without this capability, true computer vision, robust speech recognition, and interpretation of the environment and data inputs would be impossible to fully realize. This is one of the reasons we consider Deep Learning a dead end: it can only train on fixed data sets.

These encoded inputs can be combined into a composite engram that includes all the sensory modalities being sampled, allowing them to be associated in time and space.

And these engrams can be encoded from any data, for any application - in medicine, for example, encoding the state of a person's symptoms, vitals, and medical imaging into an engram.

This gives the BICHNN architecture and the underlying Core AGI the ability to function in almost any area of information science where a diverse variety of inputs can be coalesced and encoded into a compact engram format, so the Core AGI can learn how they evolve in time and build models.

AI In Law - Case Study and Long-Term Prospects

24 days ago

Justine Falcon: Legal AI

I was lucky enough to work with several attorneys throughout my career who were the good guys, fighting for people who were being taken advantage of - pushing back their aggressors and winning big judgements for their clients. They taught me how to duck in a dirty fight, to be honest, not to play the opponent's game, and to wait for the opponent to make a big mistake, then capitalize on it. We coordinated litigation and media to maximize the impact against the aggressors and help others who had been similarly wronged.

But there are some threats that no lawyer will go up against - organized crime families with deep political ties. In 2018-2020, I was targeted by a genuine racketeering enterprise consisting of lawyers, a DA, and a realtor in California (see video) that no lawyer would represent me against. So I learned law, and in the process learned how terrible the existing legal system is, and how full of potholes and landmines it can be for someone litigating pro se, representing themselves. This inspired the design of a practical and powerful AI tool in law - Justine Falcon - and will hopefully lead to a decisive final victory in this ongoing case. So far our exhibits were so comprehensive and damaging to the opposition that they had the judge seal the files from viewing by the outside world. Damn, is that a compliment?

Justine Falcon and Brent Oster vs the Moreno Attorneys, DA, and Intero Real Estate


Justine Falcon is designed to be powerful enough to fight in billion-dollar corporate litigation, where data-mining and analyzing that data to create exhibits that back up enhanced claims against an opponent, including RICO claims (increasingly common in civil proceedings), can mean decisive victory. Justine can charge enormous fees for these services.

On the other side of the spectrum, access to affordable legal services for the lower-income is increasingly out of reach in the United States. More than 80 percent of people living below the poverty line and a majority of middle-income Americans receive no meaningful assistance when facing important civil legal issues, such as child custody, debt collection, eviction, and foreclosure. These and many related problems have numerous causes, but the cumulative effect is a legal system that is among the most costly and inaccessible in the world. (from: The Public’s Unmet Need for Legal Services, Andrew M. Perlman, Daedalus, Fall 2021)

Courts are understaffed, hearings are short, and judges make snap decisions that can be detrimental or catastrophic to pro se (self-representing) individuals, simply because those individuals could not translate their arguments into the right legal language or format their filings correctly. Most of those individuals don't even know they can move for a set-aside to get a judge's decision vacated and redone. Justine Falcon, Legal AI, can help with these simple tasks, helping people author effective filings for a few hundred dollars per filing instead of thousands for an attorney. Again, scale matters here, as we are talking about millions of unrepresented people.



Any entrepreneur who sets out to make a Legal AI or Electronic Legal Aid without personally fighting in litigation for 3 years like I did would not know what a headwind they face in developing it and getting it adopted, let alone making it effective. Lawyers have huge egos, don't like technology, aren't really all that bright, and make for stubborn and fussy adopters. Judges are even more technology-phobic. And the sad fact is that in the end, even the most brilliant legal arguments and filings can fall short when a judge is biased, didn't read the filings, or worse, was bought, and just makes a knee-jerk decision.

The best long-term solution is to bypass the lawyers, judges, and courts completely, and create an AI arbitration service where the two parties going through a divorce or civil dispute are led step by step through an arbitration, with the AI explaining the relevant laws at each step and getting agreement in each area - assets, custody, claims, and other disputes. The AI then drafts a settlement, has the clients mutually agree to and sign it, and files it with the court. The need for the present expensive and corrupt system will then be reduced over time as this new, better, and more balanced system takes business away from it, until one day the last human judge can make the last filing by hand and turn out the lights on the last, empty human courtroom.

I call this the Netflix vs Blockbuster legal business model. Cheaper, more convenient, no waiting or delays, take all the time you need, and no excessive penalties. Scale this US wide and even globally, and you get a very different world, with justice and fairness for all.

ORBAI Tech Breakdown

28 days ago

Here is a simple breakdown of how our AGI tech works:

1) Real-world inputs are autoencoded into compact engrams, representing all inputs combined and cut into short intervals of time

2) Each engram is distilled into features, which are more compact and can be used numerically in operations

3) We train a predictor on how the features change in time during a real-life narrative, creating a model of the world it perceives

4) We use this predictor to create fictional narratives, features, and engrams, essentially dreaming using the predictor, and recording the best dreams into memory

5) This fills in the world model better than if we only used the real narratives and engrams (we used this in Justine - see below)

6) To solve a problem, we make fictional narratives through the model until we find an optimal solution
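The six steps above can be sketched very loosely in code. This toy version stands in a fixed random projection for the autoencoder, a linear least-squares fit for the predictor, and noisy rollouts for the dreams; every component here is an illustrative assumption, not the actual NeuroCAD/SNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1)-(2) Encode raw inputs into compact feature vectors (toy "engrams"):
# a fixed random projection plus a nonlinearity stands in for the autoencoder.
W_enc = rng.normal(size=(8, 3))
encode = lambda x: np.tanh(x @ W_enc)

# A real-life narrative: a slowly drifting 8-dim signal cut into time steps.
raw = np.cumsum(rng.normal(scale=0.1, size=(50, 8)), axis=0)
feats = encode(raw)                              # (50, 3) feature sequence

# (3) Fit a one-step linear predictor f(s_t) ~ s_{t+1} on the narrative.
A, *_ = np.linalg.lstsq(feats[:-1], feats[1:], rcond=None)

# (4)-(5) "Dream": roll the predictor forward from a state with small noise,
# generating fictional feature narratives beyond the real data.
def dream(s, steps):
    traj = [s]
    for _ in range(steps):
        traj.append(traj[-1] @ A + rng.normal(scale=0.01, size=3))
    return np.array(traj)

# (6) Solve a problem by searching the dreams for the rollout whose end
# state lands closest to a goal feature vector.
goal = feats[-1] * 0.5
rollouts = [dream(feats[-1], 10) for _ in range(100)]
best = min(rollouts, key=lambda t: np.linalg.norm(t[-1] - goal))
```

The same skeleton scales in the directions described below: more input dimensions, more narratives, a richer predictor, and more rollouts per query.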

This is a very general problem-solving approach that can be used on any form of data or inputs, and on a wide variety of different simultaneous inputs, to create a general model of very complex real-world systems. It is also extremely scalable: by increasing the number of inputs, the memory, and the processing power, we can increase the number of real-world narratives being generated, evolve more complex predictor systems, and run orders of magnitude more solution narratives with more efficient and powerful solvers. We could go from a test AGI on a small cluster to a global AGI in 3-5 years by this scaling alone.

For each of these components we use our NeuroCAD toolset to evolve the right SNN circuitry to optimally solve the problem in that step.

Just the tech for step (1), which is working decently today, is very valuable as a video/audio encoder or classifier, and for robotics, drone, and automotive vision systems. We will start licensing this out in early 2022 and expect a significant income stream just from that. We also start building a global base of systems using our technology that our customers can later upgrade to the AGI, so we don't have to build out all the servers.

And we are using a really simplified version of steps (3)-(6) in Justine Falcon, based on sequences of 5-digit codes representing each of the actions that can be taken during a legal proceeding at any point in time. The court portals have all the proceedings for the past decades in these encoded "Events and Hearings" documents, which were simple to download and use in training. Justine's utility and application are pretty impressive for such simple tech and data.
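A hypothetical sketch of that simplified approach: treat each docket's "Events and Hearings" as a sequence of 5-digit action codes and count which action tends to follow which. The codes and their meanings below are invented for illustration; the real codes would come from the court portal data, and the actual Justine system may model the sequences quite differently.

```python
from collections import Counter, defaultdict

def fit_transitions(dockets):
    """Count code-to-code transitions across many case dockets."""
    trans = defaultdict(Counter)
    for docket in dockets:
        for a, b in zip(docket, docket[1:]):
            trans[a][b] += 1
    return trans

def predict_next(trans, code):
    """Most frequently observed action following `code`, or None."""
    following = trans.get(code)
    return following.most_common(1)[0][0] if following else None

# Invented example codes: 10001=complaint filed, 10002=answer filed,
# 10003=motion filed, 10004=hearing set, 10005=judgment entered.
dockets = [
    ["10001", "10002", "10003", "10004", "10005"],
    ["10001", "10002", "10004", "10005"],
    ["10001", "10003", "10004", "10005"],
]
trans = fit_transitions(dockets)
print(predict_next(trans, "10004"))  # → 10005
```

Even a model this simple can tell a pro se litigant what procedural step usually comes next, which is the kind of guidance the filings assistance described above depends on.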

Here is the detailed Provisional Patent on AGI that describes this in more detail and the NVIDIA GTC 2021 Presentation on the same. I've written a great Quora Article on how we apply this in Law and Medicine.

