The ORBAI offering is now closed and is no longer accepting investments.

ORBAI

Artificial General Intelligence


Santa Clara, CA
Technology
ORBAI is developing Artificial General Intelligence with conversational speech and human-like AI that will serve the world online with AI professional services for information, finance, medicine, law, and other fields, delivered via all connected devices.

$88,141

raised
93
Investors
$14.9M
Valuation
$5.00
Price per Share
$250.00
Min. Investment
Common
Shares Offered
Equity
Offering Type
$1M
Offering Max
0
Days Left

This offering ended on May 01, 2022 and is no longer accepting investments.

Reasons to Invest

  • ORBAI’s AGI technology will enable truly smart devices, homes, autonomous vehicles, and online services.


  • AGI capabilities include conversational speech, with superhuman prediction, planning, and cognition, enabling intelligent services and apps via SaaS.


  • The AI market was $27 billion US in 2019, and with a CAGR of 33.2% it is expected to grow to $267 billion US by 2027.


  • ORBAI has 2 patents pending in artificial general intelligence: one for the underlying tools and technology, and one for the AGI architecture.


  • ORBAI has a senior team with decades of experience in software engineering, scientific computing, deep learning, neuroscience, and entrepreneurship, with 7 companies founded between our founders.


  • The founder and CEO of ORBAI, Brent Oster, has founded 3 companies, including the successful video game company Bioware (co-founded at age 24), which had an $860 million exit in 2007. (Source)





WHAT ORBAI DOES


Artificial General Intelligence


ORBAI is developing the next generation of Artificial General Intelligence, a universal platform that will go far beyond the capabilities of the Deep Learning based AI in today's platforms and applications.


It is modeled after the human brain and will be built to have human-like levels of cognition, prediction, planning, and conversational speech interfaces. We will build it to converse and translate fluently in any language, making it universally available on any platform and capable of hosting advanced online services from concierge and customer service to medicine and law. It will also enable control and interaction for 3D characters and general-purpose robotics across many applications.


ORBAI will provide the AGI platform upon which the services and applications are developed, and provide an API so that platform developers for mobile, smart appliances, home automation, autonomous vehicles, and robotics can integrate with the AGI via SaaS.


We plan to build a small set of applications for a subset of the platforms as a start, while cultivating an ecosystem of 3rd party developers to build applications and provide services through our platform as well as developers to build out front-ends for the various platforms. We expect that this approach will allow us to grow our markets quickly, laterally and vertically, as our AGI expands exponentially in capabilities as well.


WHAT DOES AGI ENABLE?


AGI In Multiple Verticals

ORBAI AGI In Medicine

For a near-term application in medicine, the ORBAI AGI could handle initial intake of a patient via conversational speech, take vital stats with Bluetooth sensors, and create a concise e-mail to their practitioner, saving hospital staff time and increasing efficiency for medical services.


For a long-term application in medicine, the ORBAI AGI could model the alternating diagnostics and treatment progression of specific diseases, giving doctors a tool to plan treatment along a timeline, and to even preempt many conditions and treat them before they become acute. An AGI doctor could be utilized through a mobile device, with full language, diagnostic, and even treatment suggestion capabilities. See our example of Dr Ada Medical AI.

 

*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


ORBAI AGI In Finance & Enterprise

In financial applications, ORBAI AGI could track a massive number of factors that feed into the performance of specific companies, allowing decision makers to have much better predictions of market movements.


For Enterprise and Government Agencies, the AGI could provide enhanced analytics for forecasting and decision-making, not only in finance but for the sales, marketing, product development, legal, and even HR teams, so they can forecast 4-6 months ahead, run simulations of multiple scenarios forward in time, and make the best decisions.


*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


ORBAI AGI In Law

In Law, AGI would provide legal research, composition, and filing tools for pro-se litigants and small law firms, enabling them to litigate more effectively, and give large law firms tools to deeply research cases, produce AI-composed exhibits, and plan complex strategies for litigation.


*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


ORBAI AGI In General Application

When fully realized, this AGI will be able to do many of these applications and more with near-human intelligence. It will take in varied inputs, store and encode them efficiently, and understand the relationships between different data inputs while continually evolving and improving its understanding over time, building a model of its world that can be used to predict and plan just like a human, and then go beyond us, all the while conversing with us in our language, with human speech and text.

BUSINESS MODEL


AGI Core SAAS + API for Front End


The global artificial intelligence market was valued at USD 27.23 billion in 2019 and, with a CAGR of 33.2%, is expected to reach USD 266.92 billion by 2027, according to the Fortune Business Insights report “Artificial Intelligence Market, 2020-2027”.
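
As a quick sanity check of that compound-growth arithmetic (a sketch; the dollar figures are the report's, the straight annual compounding is our assumption):

```python
# CAGR check: USD 27.23B (2019) compounding at 33.2%/yr for 8 years to 2027.
base_2019 = 27.23      # USD billions, per the cited report
cagr = 0.332
years = 2027 - 2019

projected_2027 = base_2019 * (1 + cagr) ** years
print(f"Projected 2027 market: ${projected_2027:.1f}B")  # ~$270B, near the cited $266.92B
```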


We provide access to the AGI via a Software-as-a-Service model, and license the development tools to our customers and to an ecosystem of 3rd-party developers who work with them, to facilitate connection of the customer's front end to the AGI and integration with our developers' back-end AGI-enabled services.
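
As a rough sketch of what that front-end integration could look like for a developer, assuming a hypothetical REST endpoint (ORBAI has not published an API spec; the URL, fields, and reply format below are placeholders):

```python
import requests

# Placeholder endpoint -- illustrative only, not a published ORBAI API.
API_URL = "https://api.orbai.example/v1/converse"

def ask_agi(session_token: str, utterance: str) -> str:
    """Send one user utterance to the hosted AGI service and return its reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {session_token}"},
        json={"utterance": utterance, "lang": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]  # assumed response field

# Example call (would require a real endpoint and token):
# ask_agi("demo-token", "Book me a dental checkup next week.")
```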


*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


By working with customers and 3rd-party developers to deploy and build our technology underneath these vertical applications, on the customer-facing front-end devices and online services as well as on the back end for the content and applications of our AGI, we can achieve the largest and fastest market penetration. By 2028, our goal is for every tech product and service to have ORBAI AGI Inside.


*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


We are partnering with hardware vendors and system integrators to bring our combined solutions to their customers, leveraging their sales and marketing resources.

DIFFERENTIATION FROM COMPETITION


ORBAI AGI: More General & Powerful 


Today’s generation of AI, based on Deep Learning, is narrowly focused and only able to perform very specific tasks. Training DL-based neural networks requires a large amount of training data, with lengthy testing and validation needed before the networks are usable. Such AI is incapable of imagining new information, and thus cannot produce abstract constructs from conception to imagination. These fundamental limits place severe restrictions on the applications for which an AI can be used in real-world professions. Current AI might be able to recognize objects in images and videos, but it cannot infer true meaning beyond what it has been trained on. It cannot create a story with real context or meaning, or solve any problem that requires some creative thinking.

  

The AGI from ORBAI aims to have none of these limitations, as it operates on a fundamentally different design and technology, making use of evolving spiking neural nets as components in an AGI.


This architecture can transform any real-world inputs into hierarchical abstract data, operate on it with advanced SNN solvers, predictors, and dreamers, then transform back to reality, effectively operating on that overlying reality the same way humans do, using language as the description and abstraction of that reality.


*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


This AGI will be able to understand business analytics, market movements, and behavior of people and groups, perform forecasting, and have high-level decision-making capabilities with a vast data reach in time and space and an ability to project and plan years into the future. 

  

This can revolutionize planning in finance, medicine, law, administration, agriculture, enterprise, industrial controls, traffic monitoring and control, network management, and almost any field of human endeavor where accurate projection of future trends to guide decisions can make an impact. In other words, everything.


Check out the full specs and our patents on our AGI Web Page.

ABOUT THE FOUNDER


Who and What?


The founder and CEO of ORBAI, Brent Oster, co-founded the successful video game company Bioware at the age of 24. Bioware was acquired by VG Holdings, and VG Holdings and its subsidiaries (including Bioware) were acquired for $860 million in 2007. (Source) At NVIDIA, he was a solution architect for 10 years, partnering with companies to leverage NVIDIA acceleration in high performance computing and deep learning.


His passion for AGI has driven the development and patenting of the NeuroCAD toolset (now code complete) and of the ORBAI AGI design, which is now moving from R&D through patent to development and application, and has earned him a ranking among the top 5 authors on AGI on Quora.


To realize this vision, ORBAI is building a focused, brilliant team, and giving them incremental, achievable technological stages and profitable business goals.

INVEST IN ORBAI TODAY



For $250, you could own a part of a company that is developing what we believe to be the most advanced artificial intelligence on the planet today, using tech and a business model that we believe can both expand exponentially to overtake the biggest players in the market in 5-10 years.


Our only limitation is funding, and that is gating how fast we can grow. We need to raise $1M to fully fund a development team to build a revolutionary speech demo using our architecture. The tools and foundation are all in place.


After that, we believe that when we demo to customers, partners, and investors like yourself, there will no longer be any doubt that we can do what we claim now.



*All products are in varying stages of development. Some products are in use and some are currently undergoing updates.


In the Press

AiThority

Practical Tips for Developing an Artificial General Intelligence

Forbes

How 3D And AI Are Driving The Experience Economy

Bloomberg

Artificial Intelligence Busts Crime Ring in Silicon Valley

Galvanize TV

2018 Tech Crunch Disrupt - 38 of 100 Startups - ORBAI

Real Leaders

Re-creating A Computer Brain (Actual AI)

The California.News

Brent Oster, CEO/President of ORBAI: The world needs heroes to fix these problems, cause it is really broken

Offering Summary


Company: ORBAI Technologies, Inc.

Corporate Address: 3120 Scott Blvd, Santa Clara, CA 95054

Offering Minimum: $10,000.00

Offering Maximum: $1,000,000.00

Minimum Investment Amount (per investor): $250.00











Terms


Offering Type: Equity

Security Name: Class A Common Stock

Minimum Number of Shares Offered: 2,000

Maximum Number of Shares Offered: 200,000

Price per Share: $5.00

Pre-Money Valuation: $14,922,500.00











Voting Rights of Securities Sold in this Offering

Voting Proxy. Each Subscriber shall appoint the Chief Executive Officer of the Company (the “CEO”), or his or her successor, as the Subscriber’s true and lawful proxy and attorney, with the power to act alone and with full power of substitution, to, consistent with this instrument and on behalf of the Subscriber, (i) vote all Securities, (ii) give and receive notices and communications, (iii) execute any instrument or document that the CEO determines is necessary or appropriate in the exercise of its authority under this instrument, and (iv) take all actions necessary or appropriate in the judgment of the CEO for the accomplishment of the foregoing. The proxy and power granted by the Subscriber pursuant to this Section are coupled with an interest. Such proxy and power will be irrevocable. The proxy and power, so long as the Subscriber is an individual, will survive the death, incompetency and disability of the Subscriber and, so long as the Subscriber is an entity, will survive the merger or reorganization of the Subscriber or any other entity holding the Securities. However, the Proxy will terminate upon the closing of a firm-commitment underwritten public offering pursuant to an effective registration statement under the Securities Act of 1933 covering the offer and sale of Common Stock or the effectiveness of a registration statement under the Securities Exchange Act of 1934 covering the Common Stock.

*Maximum Number of Shares Offered subject to adjustment for bonus shares. See Bonus info below.

Investment Incentives and Bonuses*

Time-Based:

Friends & Family - Invest within the first 7 days and receive 20% bonus shares.

Early Advocates - Invest within the second 7 days and receive 10% bonus shares. 

Last Chance Bonus - Invest within the third 7 days and receive 5% bonus shares.

Amount-Based:

Tier 1 | $1,000+ : 3% bonus shares

Tier 2 | $10,000+ : 7% bonus shares

Tier 3 | $100,000+ : 10% bonus shares

**All perks occur when the offering is completed.


The 10% StartEngine Owners' Bonus:

ORBAI will offer 10% additional bonus shares for all investments committed by investors who are eligible for the StartEngine Crowdfunding Inc. Owners' Bonus.

This means eligible StartEngine shareholders will receive a 10% bonus for any shares they purchase in this offering. For example, if you buy 100 shares of Common Stock at $5/share, you will receive 110 shares of Common Stock, meaning you'll own 110 shares for $500.00. Fractional shares will not be distributed and share bonuses will be determined by rounding down to the nearest whole share. 
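
To make the share math concrete, here is a small sketch assuming (per the terms above) that only the single highest eligible bonus rate applies and fractional bonus shares are rounded down:

```python
import math

SHARE_PRICE = 5.00  # per the offering terms

def total_shares(invested: float, eligible_rates: list[float]) -> int:
    """Base shares plus bonus shares at the highest eligible rate, rounded down."""
    base = int(invested // SHARE_PRICE)
    best_rate = max(eligible_rates, default=0.0)  # only one bonus applies
    return base + math.floor(base * best_rate)

# The page's example: $500 with the 10% StartEngine Owners' Bonus -> 110 shares.
print(total_shares(500.00, [0.10]))
```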

This 10% Bonus is only valid during the investors' eligibility period. Investors eligible for this bonus will also have priority if they are on a waitlist to invest and the company surpasses its maximum funding goal. They will have the first opportunity to invest should room in the offering become available if prior investments are canceled or fail. 

Investors will only receive a single bonus, which will be the highest bonus rate they are eligible for.

Irregular Use of Proceeds

We will not incur any irregular use of proceeds.

                             Most recent fiscal year-end   Prior fiscal year-end
Total Assets                 $0.00 USD                     $0.00 USD
Cash And Cash Equivalents    $0.00 USD                     $0.00 USD
Accounts Receivable          $0.00 USD                     $0.00 USD
Short Term Debt              $293,230.00 USD               $291,251.00 USD
Long Term Debt               $0.00 USD                     $0.00 USD
Revenues And Sales           $0.00 USD                     $0.00 USD
Costs Of Goods Sold          $0.00 USD                     $0.00 USD
Taxes Paid                   $0.00 USD                     $0.00 USD
Net Income                   -$113,670.00 USD              -$595,652.00 USD

Risks

A crowdfunding investment involves risk. You should not invest any funds in this offering unless you can afford to lose your entire investment. In making an investment decision, investors must rely on their own examination of the issuer and the terms of the offering, including the merits and risks involved. These securities have not been recommended or approved by any federal or state securities commission or regulatory authority. Furthermore, these authorities have not passed upon the accuracy or adequacy of this document. The U.S. Securities and Exchange Commission does not pass upon the merits of any securities offered or the terms of the offering, nor does it pass upon the accuracy or completeness of any offering document or literature. These securities are offered under an exemption from registration; however, the U.S. Securities and Exchange Commission has not made an independent determination that these securities are exempt from registration.


Updates

Last Day to Invest!

about 2 months ago

Come invest in ORBAI's RegCF Round before it closes on April 30th!

According to the April 2022 “Fortune Business Insights Report, Artificial Intelligence Market”, the global Artificial Intelligence (AI) market size was valued at USD 328.34 billion in 2021. The market is projected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion by 2029, exhibiting a CAGR of 20.1% during the forecast period.

We are offering a chance to get in on the ground floor of the next, 3rd generation of artificial intelligence, and all we need is $200,000 total this round to build the core NeuroCAD technology MVP by Sept, publish our Book on AGI, land customers, and begin this revolution. Come and invest in ORBAI and own a part of that revolution!

ORBAI Pitch Video

Last 2 Days to Invest in ORBAI!

about 2 months ago

Come invest in ORBAI's RegCF Round before it closes in just 2 days on April 30th!

According to the April 2022 “Fortune Business Insights Report, Artificial Intelligence Market”, the global Artificial Intelligence (AI) market size was valued at USD 328.34 billion in 2021. The market is projected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion by 2029, exhibiting a CAGR of 20.1% during the forecast period.

We are offering a chance to get in on the ground floor of the next, 3rd generation of artificial intelligence, and all we need is $200,000 total this round to build the core NeuroCAD technology MVP by Sept, publish our Book on AGI, land customers, and begin this revolution. Come and invest in ORBAI and own a part of that revolution!

ORBAI Pitch Video

Last Chance to Invest in The Next Generation of AI!

2 months ago

ORBAI'S RegCF investment round closes in 4 days. Come and invest now before it is too late.

We need to raise a total of $200,000 in our RegCF round by April 30th to comfortably fund our MVP NeuroCAD product, and we are only at $75,400 (as of this update). We need to raise about $124,500 more in the last 4 days of our campaign to contract a GUI and a CUDA programmer for the next 6 months. Spread the word, and get your friends to invest.


According to the April 2022 “Fortune Business Insights Report, Artificial Intelligence Market”, the global Artificial Intelligence (AI) market size was valued at USD 328.34 billion in 2021. The market is projected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion by 2029, exhibiting a CAGR of 20.1% during the forecast period.

We are truly offering a chance to get in on the ground floor of the next, 3rd generation of artificial intelligence, and all we need is $124,500 more to build the core technology MVP, publish the book, land customers and begin this revolution. Come and invest in ORBAI and be a part of it!

Completing NeuroCAD 5.0 - Need your help.

2 months ago

We only have 8 days left in our RegCF Round fundraising, and we need your help!

ORBAI has had development underway on NeuroCAD v5.0 for a week now, implementing a better UI and new simulation and neural network features beyond the current NeuroCAD v4.1 version.

After the success of our BICHNN SNN Autoencoder at NVIDIA GTC 2022, and after getting feedback from Alpha customers, we learned a lot about what was needed in the toolchain to achieve commercial viability: some additional features and improved ease of use. So, we branched the code base and began work on NeuroCAD v5.0 last week.

Here is the budget that we need to pay for development of NeuroCAD v5.0:

NeuroCAD v5.0 Budget

Personnel           May     Jun     Jul     Aug     Sept    Oct
Brent               $2,500  $2,500  $2,500  $2,500  $2,500  $2,500
Qt GUI Coder        $4,000  $4,000  $4,000  $4,000  $4,000  $4,000
CUDA/OpenGL Coder   $6,000  $6,000  $6,000  $6,000  $6,000  $6,000

TOTAL: $75,000


This will be our MVP, and we plan to release the NeuroCAD toolkit as Open Source in Oct along with our Book on AGI. We also need another $15,000 to pay for the publishing of that book.

The plan is to then use the publicity from the book, along with targeted press releases and the growing interest in AGI and the NeuroCAD toolkit, to win contracts developing AGI applications for our first customers, charging them an up-front NRE fee to build out the solution and an ongoing usage fee for the AGI as a service: the first AGIaaS in the world, used for applications in speech, robotics, personal assistants, customer service, and professional services like finance, enterprise, law, and medicine.

(Click for ORBAI Pitch Video)

We need to raise a total of $200,000 in our RegCF round by April 30th to comfortably fund this effort, and we are only at $73,500 (as of this update), with most of that already spent on prior development, contractors, patents, accounting, and other costs over the past 5 months. We need to raise about $126,500 more in the last 8 days of our campaign. Spread the word, and get your friends to invest.

According to the April 2022 “Fortune Business Insights Report, Artificial Intelligence Market”, the global Artificial Intelligence (AI) market size was valued at USD 328.34 billion in 2021. The market is projected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion by 2029, exhibiting a CAGR of 20.1% during the forecast period.

We are truly offering a chance to get in on the ground floor of the next, 3rd generation of artificial intelligence, and all we need is $126,500 more to build the core technology MVP, publish the book, land customers and begin this revolution. Come and invest in ORBAI and be a part of it!

Only 10 Days Remaining to Invest in The Future of AI

2 months ago

ORBAI's Campaign Closes April 30th, in only 10 days. Come and invest in the future of artificial intelligence before it is too late.

According to the April 2022 “Fortune Business Insights Report, Artificial Intelligence Market”, the global Artificial Intelligence (AI) market size was valued at USD 328.34 billion in 2021. The market is projected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion by 2029, exhibiting a CAGR of 20.1% during the forecast period.

However, these projections do not take into account a disruptive Artificial General Intelligence like ORBAI is developing - permeating first academia and research by 2023*, then into almost every commercial product with a computer chip by 2025*. In 2024*, human-like AGIs will emerge into the professional services and administrative industries*, and by 2028 they will dominate* and soon merge into a singular AGI superintelligence spanning the globe* and become so much more.

The future ‘market’ for an AGI Superintelligence could scale to be a double-digit percentage* of the world’s growing $100 trillion GDP in a couple of decades if it permeates as deeply as we forecast it could. This really is a civilization-changing technology. Come and invest in ORBAI and be a part of it!

*Forward looking statements

Applications of ORBAI AGI - Finance and Enterprise

2 months ago

Applications of AGI in Finance and Enterprise

(from Chapter 11 of the Upcoming ORBAI / Advantage Publishing Book)

In financial applications, the AGI could track a massive number of factors that feed into the performance of specific companies, allowing decision makers to have much better predictions of market movements. By training the SNN predictors on data that encompasses TSBCs generated from the SNN Autoencoders encoding past stock charts, external data about the company, data about similar companies, competitors, world data, and anything related to the target company's ecosystem, we could create automated trading algorithms that would greatly outperform human traders, simply by seeing more deeply into a wider data set than humans ever could and predicting more accurately. To be more specific, the AGI would learn a model of how groups of human traders and existing automated trading algorithms react to these factors and then front-run, or trade ahead of, them to make higher margins based on the subsequent market movements.
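
As a toy illustration of the "train a predictor on encoded market factors" idea, here is a sketch that uses random vectors as stand-ins for the TSBC encodings and a plain ridge regression as a generic stand-in for the SNN predictors (not ORBAI's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoded market factors (the text's TSBCs): 500 days x 32 features.
X = rng.normal(size=(500, 32))
true_w = rng.normal(size=32)
y = X @ true_w + rng.normal(scale=0.1, size=500)   # proxy for next-day returns

# Ridge regression: w = (X^T X + lam*I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ y)
print("in-sample MSE:", np.mean((X @ w - y) ** 2))
```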

For Enterprise and Government Agencies, the AGI could provide ERP tools for enhanced analytics, for forecasting, and decision making in not only finance, but for the sales, marketing, product development, legal, and even HR teams, so they can forecast 4-6 months ahead, run simulations of multiple scenarios forward in time, and help the human users make the best decisions based on the outcomes of these simulations. 


AGI-augmented ERP tools would provide an AGI-assisted platform that lets executives and high-level employees spend more time collaborating on content. The AGI would be an integral part of this collaborative process, gathering data, formatting it into charts, graphs, and reports, communicating as a team member in human language and acting as an overseer, focusing and coordinating the activities of the teams. 

Let’s just pause and consider that. When a corporate executive team adopts the AGI-ERP tool, it suddenly gains another team member that speaks their language both in text and graphically, can integrate with them seamlessly, take on an enormous workload, and does so working with a prescience that no human has, being able to truly forecast into the future to predict the outcomes of potential plans months, even years in advance.

These AGI-ERP tools also offer many benefits such as standardization of common processes into one integrated system, standardized AGI-generated reporting, key performance indicators (KPI), and access to common data with a centralized system that provides tight integration with all major enterprise functions be it HR, planning, procurement, sales, customer relations, finance or analytics, as well to other connected application functions. 

By being able to predict, set and track KPIs and critical success factors, the AGI-ERP suite can keep projects and divisions on track and prevent organizations from making costly mistakes.

Once these tools are adopted worldwide, their true power to enact global change is revealed. The financial AI can achieve financial returns greater than any brokerage's, and as it is adopted globally, tap into massive wealth, of which a portion can be redirected to fund subsistence living for those most in need. 1% of $500 trillion in world wealth per year would go a long way toward ending world poverty and hunger for the 15% of the population living below the extreme poverty line, and could give them low-cost automated trading accounts that hold a minimum percentage balance to invest, so they too can grow their net worth, prosper, and break the cycle of poverty.

The Enterprise planning AGI can suggest and fund joint projects that companies and governments can work on for mutual benefit, such as planning and development of new, inexpensive, high efficiency solar panels deployed in international energy farms, and higher energy capacity batteries in inexpensive electric cars and home power grids that span nations. The AI would undertake planning and coordination of these mega-projects to accomplish them by sub-tasking individual corporations and government agencies, coordinating their efforts and financing - more efficiently and faster than human administration could, making them more feasible and profitable, and succeed in their execution where humans have only failed before.

Such forecasting and planning AGI could have applications for every company, government agency, and person on earth, and help guide our collective efforts to truly bring change to the world.

Announcing NeuroCAD v5.0

2 months ago

ORBAI is announcing that development is underway on NeuroCAD v5.0, implementing a better UI and new features beyond the current NeuroCAD v4.1 version.

After the success of our BICHNN SNN Autoencoder, and after getting feedback from Alpha customers, we learned a lot about what was needed in the toolchain to achieve commercial viability: some additional features and improved ease of use. So, we branched the code base last week and began work on NeuroCAD v5.0 this week.


We will be keeping the core cross-platform Qt GUI and the high performance CUDA Spiking Neural Net simulation, and adding:

  • Improvements to the GUI, including more connection map types, connections to more distant layers, inhibitory connections, and a greatly streamlined workflow
  • Izhikevich Neuron model fully implemented (in addition to Leaky Integrate and Fire); see the sketch after this list
  • Leaky Integrate and Fire synapse model with Hebbian learning
  • Better modelling of time-dependent spike transmission
  • Optimizations to CUDA Simulation
  • Inhibitory synapses
  • Enhanced OpenGL Rendering with better visualization
  • Visualization highlighting activity in specific network subsections
  • Full integration of Genetic Algorithm runs (currently launched separately) and run-time diagnostics
  • More powerful, more flexible neural network models with better performance
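
For readers curious about the Izhikevich neuron model named above, here is a minimal sketch of the published Izhikevich (2003) equations (v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with a reset when v reaches 30 mV); a generic illustration, not ORBAI's CUDA implementation:

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Simulate one regular-spiking Izhikevich neuron; returns spike times (ms)."""
    v, u, spikes = c, b * c, []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: reset v, bump recovery variable u
            v, u = c, u + d
            spikes.append(t)
    return spikes

spike_times = izhikevich(np.full(1000, 10.0))   # 1 s of constant 10 pA drive
print(f"{len(spike_times)} spikes in 1000 ms")
```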

This will allow our customers to integrate the tools into their current R&D much more easily, and build the functionality they need into their products. Right now, some of these gaps are gating adoption.

When will it be done? That is in the hands of the StartEngine investment community. If we can wrap up our RegCF round with a decent amount of investment, we can afford contractors to help with the Qt GUI, CUDA, and OpenGL programming tasks and have NeuroCAD v5.0 wrapped up and in customers' hands inside of 6 months. Without that investment, it will continue to be a solo effort on Brent's part, with part-time help from Eric and Gunnar, and will take longer.

We know that the technology works, but ease of use and a few missing features are the only things holding back sales and gating the explosive growth of the company. Once we have the new and improved NeuroCAD v5.0 in customers' hands, good feedback, and income from them, we can much more easily raise our Seed round and start talking to people about an acquisition.

Remember, the RegCF Round closes on April 30th, so invest now and help us reach our goals.

ORBAI Campaign Ends April 30

3 months ago

After running our RegCF campaign on StartEngine since Oct 2021, we will be closing the round as of April 30th, 2022. We will then be raising our Seed Round through more traditional VC channels. We appreciate the investment so far from the StartEngine community; it has helped us pay contractors, file our AGI patent, and sustain us to the point that we could successfully demonstrate our SNN Autoencoder technology (the key to our AGI plans).

Please take the time to read our new campaign page, check out our ORBAI Web Page, and even read our upcoming Book on AGI and decide if you want to invest in a company that is going to build an AGI and use it to make the world a better place, in one of the most ambitious endeavors in human history. Come be a part of designing, building and evolving what comes next in artificial intelligence.


Press Release About SNN Autoencoder Success.

3 months ago

From the press release we did this week about our BICHNN SNN Autoencoder success:

"This past week at NVIDIA GTC, a Silicon Valley startup, ORBAI, demonstrated its revolutionary BICHNN SNN Autoencoder AI technology as part of its NeuroCAD tool suite. This SNN technology uses generation 3 spiking neural networks running on NVIDIA GPUs to autoencode data for video, speech, vision and other applications using completely unsupervised learning that happens at run-time, even in deployment.

This single architecture is a powerful general purpose neural computer and is capable of replacing all of the current AI neural networks using DNNs, CNNs, RNNs, Transformers, and other application-specific architectures with one unified general architecture over the next 3-5 years."

ORBAI NVIDIA GTC 2022 Update

3 months ago

Our networking and connecting with potential partners at NVIDIA GTC is going well despite the fact that it was moved to a virtual format this year. We have made good contacts with key people at Google Research, OpenAI, and other large AI players, and are pitching our SNN technology to them.

To clarify a couple of points: ORBAI is a deep tech company building a revolutionary core AI technology to displace deep learning, and will most likely exit via M&A in the next 2 years, based just on our first patent on NeuroCAD and the SNN Autoencoder technology. The second patent on AGI, the book, and all the applications we could build with them are our follow-through plan if fully funded, but they mostly serve as a multiplier for an exit valuation because they demonstrate technologies that are much more valuable based on our extended IP.

All of the principals and key contributors at ORBAI (Eric, Gunnar, Corey, and myself) are financially self-sufficient, have been doing ORBAI on our own dime, and can continue to do so indefinitely. The money we have raised has gone to pay for contract work, trade shows, patents, our AGI Book, advertising, and legal and accountant fees. We can keep ORBAI going on a very low burn rate and contribute funds as needed, as we have for 4 years now.

The people we are contacting as our first customers are a select audience in R&D departments at FAANG tech companies and large robotics and auto manufacturers. We are pitching a collaborative effort (paid by them) to work with their R&D groups to integrate and demonstrate the BICHNN SNN Autoencoders in video, vision, and speech applications to show them how much better these can do the job.

We have already begun working with Aqua Mergers and Acquisitions to position and groom ORBAI for an acquisition, planned for sometime between this year and next, before we have to raise a Series A (forward-looking statement, not a guarantee). We will most likely still raise a Seed round after our StartEngine campaign ends to facilitate the work we are doing with the alpha customers, and to build the tech and demos that show we can follow through on the whole AGI plan.

ORBAI is tracking well, and our announcement in the updates about demonstrating our SNN Autoencoder technology was perfectly timed for NVIDIA GTC. We will be doing a press release next week on it, timed so as not to be lost in the noise this week.

Although it may seem a modest accomplishment, showing our SNN Autoencoder is like Edison demonstrating the first tungsten filament electric light bulb. It may not be as bright as the current DL / gas lamps, and it may not have the backing and infrastructure that Deep Learning / Electricity had at introduction, but it is indisputably the way of the future, and we are the first company to demonstrate it working, with our patented methods, so we are on a solid footing for our next steps.

BICHNN SNN Autoencoder successfully demonstrated!

3 months ago

ORBAI Demonstrates its BICHNN SNN Autoencoder in Action:

Our BICHNN SNN Autoencoder essentially wraps the functionality of CNNs, RNNs, LSTMs, Transformers, and other DL neural nets into a single, more general architecture that can be specialized by evolution via genetic algorithms to do most sensory and data encoding / decoding tasks. The output is compatible with present DL methods for clustering, PCA, training predictors, and more, and we are working on the more advanced AGI that it is a component of.

This is a first, and big AI companies have been trying to get SNN Autoencoders to work for several years now, without much success. Our key, patented innovations are the feedback architecture and the genetic algorithms used to evolve and tune it. Within 3-5 years, such SNN Autoencoders could replace most of the DNN technologies out there because SNNs are more powerful, flexible, and general.
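
To illustrate "evolve and tune via genetic algorithms" in the abstract, here is a minimal generic GA loop over a toy genome, with a made-up fitness function standing in for evaluating an evolved network (not the patented method):

```python
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([0.5, -1.0, 2.0, 0.0])   # toy optimum

def fitness(genome):
    """Stand-in for scoring an evolved network configuration."""
    return -np.sum((genome - TARGET) ** 2)

pop = rng.normal(size=(40, 4))              # 40 candidate genomes
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]   # selection: keep the 10 fittest
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    pop = parents.mean(axis=1)              # crossover: average two parents
    pop += rng.normal(scale=0.05, size=pop.shape)  # mutation

best = pop[np.argmax([fitness(g) for g in pop])]
print("best genome:", np.round(best, 2))    # converges near TARGET
```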

More on BICHNN Autoencoder

More on AGI built with SNN Autoencoders

ORBAI Writes The Book on AGI

4 months ago

The founders of ORBAI are co-authoring a book with ForbesBooks / Advantage Publishing, based on our patents, tech and the highly acclaimed technical and business writing we have been doing about AGI on Forums such as LinkedIn, Medium, Quora (now ranked #1 in AGI), and others.

Artificial General Intelligence is a tough premise to sell because it is technically daunting, many think it to be impossible, and people often cannot differentiate between it and the status quo of today's deep learning AI. This book is designed to educate our prospective investors, customers, and partners about a realistic plan for AGI and its differentiation from what DL is today, to be a marketing tool, and to establish ORBAI and our founding team as thought leaders in AGI.

It has 4 main sections:

  1. Deep Learning and Limitations
  2. Brain Neuroscience and AGI Requirements
  3. Our AGI Design and Examples in Use
  4. Business Model and Future of AGI

Here is a link to the roughly 120-page book draft we are currently working on (for prospective investors to preview only) in Google Docs. Publishing is expected in 4-6 months, in all e-book formats and paperback.

https://docs.google.com/document/d/1UtUyESSbi1TIr-jAScpJvqw02-txj0dX/edit?usp=sharing&ouid=116617983015138236612&rtpof=true&sd=true

ORBAI CEO Brent Oster Ranked #2 In Artificial General Intelligence

5 months ago

Brent Oster, the ORBAI CEO is ranked #2 in most viewed authors in the 641,800 person group "Artificial General Intelligence" on Quora, and closing fast on the number 1 spot.

By revamping one of his most successful AGI articles with the best of the patent content (explained in plain terms), and prefacing it with some choice neuroscience to back up the design, Brent climbed from #4 to #2 this week.

Article: https://qr.ae/pG3i65

ORBAI Files AGI Patent with USPTO

5 months ago

On Jan 13th, ORBAI (with the help of FAI Patents) filed a Utility Patent with the USPTO, titled:

PROCESSES AND METHODS FOR ENABLING ARTIFICIAL GENERAL INTELLIGENCE CAPABLE OF FLEXIBLE CALCULATION, PREDICTION, PLANNING AND PROBLEM SOLVING WITH ARBITRARY AND UNSTRUCTURED DATA INPUTS AND OUTPUTS


This patent is the result of our 4-year R&D effort in the field of AGI, and represents the technology we will start building out once funded. Check out the details of the technology at: www.orbai.ai/artificial-general-intelligence.htm

Notice of Material Change in Offering

5 months ago

[The following is an automated notice from the StartEngine team].

Hello! Recently, a change was made to the ORBAI offering. Here's an excerpt describing the specifics of the change:


Issuer is extending the length of their campaign by 61 days.


When live offerings undergo changes like these on StartEngine, the SEC requires that certain investments be reconfirmed. If your investment requires reconfirmation, you will be contacted by StartEngine via email with further instructions.

Paper Submitted to Artificial General Intelligence Society AA-22 Conference

5 months ago

ORBAI Submitted a Paper entitled "Mapping Reality to Math using Basis Transforms and Temporal Networks" to AGI-22: The 15th Conference on Artificial General Intelligence, hosted in Moscow (and virtually) in June 2022, by the Artificial General Intelligence Society, in cooperation with the Association for the Advancement of Artificial Intelligence (AAAI).

The content closely follows the patent ORBAI just filed, and discloses our patent-pending methods for taking in any form of arbitrary input data, learning to transform that data into an internal numerical format via learned basis sets, performing a plurality of numerical operations on it, and transforming it back to reality in the reciprocal process, allowing our AGI to essentially operate on reality.
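
A classical linear analogue of that basis-transform loop (learn a basis from data, operate in coordinate space, transform back) can be sketched with an SVD; the paper's learned, spiking basis sets would replace this, so treat it purely as an analogy:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 16))             # arbitrary 16-dim input data

# "Learn" a basis from the data itself (here: top-8 right singular vectors).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:8]

coords = (X - mean) @ basis.T              # reality -> internal coordinates
coords *= 0.5                              # a numerical operation in basis space
X_back = coords @ basis + mean             # reciprocal transform back to reality
print("round-trip shape:", X_back.shape)
```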


Updated Page

6 months ago

We are updating our campaign page, to more closely align with our web page, with more focus on the market, business model, and go-to market plan:

https://www.orbai.ai/about-us.htm

We also have a pitch deck on that page with more details about the business plan.

One critical point is that we are not trying to do the AGI, all the services, and the deployment on all the platforms ourselves. We will work with the developers of mobile devices, smart homes, appliances, automotive and robotics, and train them and 3rd parties on use of our tools to do that integration and build an ecosystem that can profit from using our core in their integration and development work.

We will also seed and grow a 3rd-party developer network to provide applications built on the AGI core for every vertical we can touch, from customer service and concierge to finance, law, medicine, and administration. Imagine the dept. of motor vehicles and other government services streamlined by AGI, not Skynet or iRobot like in fiction.

We will develop Dr Ada - Medical AI, and Justine Falcon - Legal AI prototypes as demos of what the AGI can do, but partner with 3rd parties specialized in law and medicine to bring the tech to market in their products. This allows us to focus on the core AGI tech, do what we do best, stay lean, and leverage other companies' verticals to expand exponentially, with a low cost per customer acquisition for us.

We also have a key advantage with our tech able to scale exponentially while DL plateaus in the next 2-3 years. This is because we have a more powerful analog neural network computer as our core element instead of static deep learning.

We did the Eta AGI video to really drive this point home, how quickly this tech can scale both technical and business sides, and overtake other players in the market. https://youtu.be/93Q6K7mfvF0

On the tech front, we have 2 patents pending: the first on the NeuroCAD tools and methods, and the second an AGI Utility Patent that describes how we will design, train, and evolve an Artificial General Intelligence built with those tools.

ORBAI TRACTION

NeuroCAD Utility Patent US # 16/437,838 + PCT, filed June 11, 2019

$678,000 Friends and Family Investment Round, 2019-2021

AGI Provisional Patent US #63/138,058, filed Jan 2021

AGI Utility Patent DRAFT, based on provisional #63/138,058, Jan 2022

Shows and Interviews: NVIDIA GTC, Tech Crunch, Singularity University

Featured in Ai Authority, Forbes, Dojo Live, Yahoo Finance, Digital Journal, EIN Newswire, Quora 

$65,000 / $1M Reg CF round raised on StartEngine since Oct 2021

ORBAI’s global vision is to use artificial general intelligence to bring a brighter future for everyone and level the playing field, bringing services to the entire world that provide unparalleled prosperity, health, justice, security, and education, and, for the first time in human history, real hope to all.

Brent Oster, CEO ORBAI 

brent.oster@orbai.com


Speech, Vision and Motor Control - All in One Architecture?

6 months ago

How does the brain handle vision, speech, and motor control? Well, it's not using CNNs, RNNs, nor Transformers, that's for sure. They are mere tinker toys by comparison.

First, the brain:

The thalamocortical radiations are a neural structure that branches out like a bush from the thalamus at the center (the main input / output hub of the brain for the senses, vision, audio and motor outputs) with the finest branches terminating at the cerebral cortex, feeding input/output from/to the senses to/from each of the cortical columns.

The cortical columns of the cerebral cortex are analogous to our terminal layer of autoencoders, a map storing the orthogonal basis vectors for reality and doing computations against them, including computing basis coordinates from input engrams.

Each section of the cortex is specialized for a specific type of input (visual, auditory, olfactory,… ) or output (motor, speech), and our design will have a separate hierarchy and autoencoder basis set for each mode of input, to generate basis coordinates for that input / output mode.

In the brain, there is also the ROS-Inhibitory network that hierarchically creates a sequential output. It starts with a series of linear neurons that fire sequentially, called Rank Order Sequential (ROS) neurons, which, by firing in a sequential chain, set a tempo or pattern with time (t) for a sequence of outputs, where that time-series ROS signal along the linear chain is the same regardless of the output to generate. The signal at each ROS neuron is then input to the root of each of a plurality of hierarchies of branching structures of neurons.

Then, our system:

In our artificial ROS-Inhibitory network, a linear series of artificial neurons fires in sequence, generating an excitatory signal when each one fires, causing each root artificial neuron in the attached branch structures to fire, and as the signal cascades down the inhibitory neural network, it is selectively inhibited by an external, time domain control signal at each neuron, by modulating the neuron’s outgoing signal by its inhibitory signal. Overall, this selects which branches of the hierarchy are activated - by controlling the inhibition at each neuron in that hierarchy. 
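
Here is a minimal numerical sketch of that gating idea on a toy binary tree, assuming simple multiplicative inhibition at each branch (illustrative only, not the patented network):

```python
import numpy as np

DEPTH = 3  # binary tree: 2**3 = 8 leaf outputs (e.g. phonemes)

def leaf_activations(gates):
    """Cascade a unit ROS pulse from the root, multiplying by the inhibition
    gate at every branch, so the gates select which leaves fire."""
    acts = np.ones(1)
    for level in range(DEPTH):
        acts = np.repeat(acts, 2) * gates[level]   # split to children, gate each
    return acts

# Gates routing the pulse to leaf 5 (path right-left-right); 0 = fully inhibited.
gates = [np.array([0.0, 1.0]),
         np.array([0.0, 0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0])]
print(leaf_activations(gates))   # 1.0 at index 5, zeros elsewhere
```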


By repeatedly training this system on a set of speech inputs, with the input reaching the terminal branches of the ROS-Inhibitor network and training only the lower levels first, it would first learn a sequence of phonemes, then progressively whole words, phrases, sentences, and larger groupings, like a chorus in a song or repeated paragraphs in legal documents.

An analogy is a music box with a set of pins placed on a revolving cylinder or disc to pluck the tuned teeth (or lamellae) of a steel comb. In the example of an adjustable music box, we can place each pin individually, or place a set of pins representing a sequence of notes that repeats often in the musical sequence. This pre-configured set of pins reduces the data needed to describe the music sequence, and makes it easier to compose music on it. In our example, we can reduce a series of data that is often repeated to a hierarchically organized set of macros, or pre-defined sequences of that data, and not have to explicitly represent each data point.

 

This is the way humans learn speech as babies: sounding out syllables and babbling, then learning to speak words one syllable at a time, then smoothly as whole words, then whole phrases, sentences, and paragraphs. Because new temporal basis sets are being laid down by this process, learning accelerates as it builds on them, in both humans and in our artificial methods.

Once trained, our system can be run forward, with the ROS / excitatory neurons firing in sequence and playback of the trained HTSIS inhibitory signals modulating the activity of the neurons in the network, to create a sequence of phonemes, words, phrases, and paragraphs, reproduce video from synthetic memories, and generate motion control by blending hierarchical segments (directed by the AI).


Now we can do processing on the hierarchical inhibitor signals as our version of memory, like training a predictor or two on them to learn conversational language.

The above methods for speech would also work for controlling motion for robots and animated 3D characters, with vision, proprioception, touch, and speech as inputs, and actuator commands or animation generated as a result, using networks of predictors and solvers to plan movement and execute high-level commands from speech, either from an external source or from the AI's own internal monologue, using language as a code to specify movement. That speech can be organized hierarchically, so there are low-level movements like ‘flex pinkie finger right hand 10%’ or high-level commands like ‘walk forward 2 meters, turn left, and stand on one foot’. Internal monologues need not be scripted, as they could be generated like our above conversation, reacting to what is happening in the world (vision, audio, touch, proprioception) and predicting what may happen next, then synthesizing intelligent movement based on training and practice.

And that is how you design a core AGI that can do speech, vision, and motion control. Each function will use similar systems derived from the core design, but will be trained and evolved to function optimally for its purpose.

Here is a video. The visuals are pretty closely matched with the described functionality.
https://youtu.be/93Q6K7mfvF0



ORBAI Retains Franklin & Associates International Inc to file AGI patent

7 months ago

ORBAI has retained Franklin & Associates International Inc to file our AGI patent, titled:

PROCESSES AND METHODS FOR ENABLING ARTIFICIAL GENERAL INTELLIGENCE CAPABLE OF FLEXIBLE CALCULATION, PREDICTION, PLANNING AND PROBLEM SOLVING WITH ARBITRARY AND UNSTRUCTURED DATA INPUTS AND OUTPUTS


Franklin & Associates International (FAI Patents) is a high quality and progressive international patent agency practice with an exclusive focus on all matters relating to procuring patent and design rights in the United States, Europe and Worldwide. We strive to help you understand your patent rights so that you can develop and leverage your portfolio effectively. As we do not represent clients in any other legal matters, such as IP litigation, our clients can be assured that patent prosecution remains at the forefront of our practice and never takes a back seat.

You can check our AGI Website, where there is a live link to the utility patent draft, and check out the AGI Eta Video which very accurately describes the tech in the patent, how the actual AGI will function, and what the applications and impact will be.

Notice of Funds Disbursement

7 months ago

[The following is an automated notice from the StartEngine team].

Hello!

As you might know, ORBAI has exceeded its minimum funding goal. When a company reaches its minimum on StartEngine, it's about to begin withdrawing funds. If you invested in ORBAI be on the lookout for an email that describes more about the disbursement process.


This campaign will continue to accept investments until its indicated closing date.


Thanks for funding the future.

-StartEngine

Dr Ada Medical AI Featured in The Odyssey Online

7 months ago

We need a healer that can take the wonders of North American and European medicine, with its power to diagnose and treat disease, encapsulate it in an AI, and take it everywhere, to the rest of the world, to heal the sick and get them on their feet so a better world for them is even possible. Introducing an advanced medical AI with the same diagnosis and treatment capability, using the advanced medical databases… for the whole world: Dr Ada, Medical AI.

Dr Ada: featured in The Odyssey Online:

https://www.theodysseyonline.com/this-ai-takes-the-knowledge-of-american-and-european-medicine-to-the-rest-of-the-world-affordably

Note that we need to get the Unsupervised Language Engine (below) and the predictor systems working in 2022 before we work on the AI professionals like Dr Ada in 2023, but the tech is moving faster than expected due to better up-front design.

Learning Language Unsupervised

7 months ago

For NVIDIA GTC in March 2022, we are going to demonstrate our AGI learning language, just by listening. How will we do this?

We reverse engineered the Cerebral Cortex and the ROS-Inhibitory networks in the actual brain. Here is how that works:

HAN - Hierarchical Autoencoder Network (our hierarchy connecting to our Cerebral cortex)

Encodes sound or speech to basis sets (phonemes) and decodes them back

Actual Cerebral Cortex and Thalamocortical radiations (equivalent of our HAN)


ROS - Inhibitory Neural Network


1. A method combining the methods for ROS and HAN such that the hierarchical autoencoder network (HAN) and the ROS / Inhibitory network can learn to understand, read, and speak human language just by reading or listening. The process for hearing, listening and learning speech works as follows:

  1. Audio is input to the HAN, with the majority of the signal being speech (less noise for training)
  2. The HAN reduces the signals to a basis set for speech – phonemes, duo-emes, and variations
  3. Audio input is compared to the speech basis set of phonemes, and the coefficients determined.
  4. Each coefficient is fed backwards through the ROS/Inhibitor network, back-driving the inhibitor signals to train them
  5. The inhibitor signals become the output of the HAN ROS/Inhibitor network, organized hierarchically.

2. A method for the AI to converse naturally with a human, by training a set of predictors evolved to learn human language through training and evolution on a plurality of human conversations, where each predictor learns proper responses, grammar, and composition by training on one person's side of the conversation, on the hierarchy of words and sentences stated by each person in an alternating conversation. Then, when actually conversing, the AI uses each predictor to predict what the other person will say next and what the AI should say now based on that prediction, bridging past and future, pulling words and phrases from previous segments of the conversation to incorporate them where appropriate, and dreaming where it needs to ad-lib the conversation. Each predictor would also have connections to information about other modalities and their hierarchies, including visuals, audio, date, time, and location, to give the words context and to interface with peripherals.

3. A method for taking the output of the language generation system and using it to generate spoken and/or written language:

  1. The generated speech becomes the inhibitor signals input to the ROS/Inhibitor network, organized hierarchically as in (1)
  2. The ROS/Inhibitor branches fire, modulated by the inhibitor signals
  3. The output from the ROS/Inhibitor sends signals to the HAN cortex layer
  4. Specific basis phonemes are activated, propagating up the HAN, and assembled into words
  5. Audio or text is output from the top HAN layer


Some more details on the ROS-Inhibitor network:

4. A method for computing temporal outputs for motor controls, language, and other outputs based on ROS neurons. It originates with a signal that sets a tempo or pattern with time (t), that is the same regardless of the output to generate. This signal is then input at the root of each of a plurality of hierarchies of branching structures (with the outputs at the leaf nodes) that can be selectively inhibited at each branch and level (by another inhibitory signal that is modulated with the one passed down into the hierarchy at the branch) to select which terminal node(s) of the hierarchy are activated when the tempo signal is >0. Each branching hierarchy forms a spatial-temporal basis set that can be controlled by the inhibitory signals, and the outputs from each blended via these inhibitory signals to form novel output units that are sequenced temporally. This is trained by back-driving the desired outputs to train the inhibitory signals to generate that output.

Creating a branching pyramidal neural structure, branching from the ROS temporal input origin up to the ‘cortical column’ autoencoders such that each outermost branch terminates at one autoencoder, with the signal strength being the basis coefficient fed into that autoencoder. Now, when the ROS temporal input fires, the signal travels through the branches (modulated by the inhibitory signals to each branch), delivering basis coefficients that modulate with the basis vectors in the cortical column / autoencoder layer, which are then propagated up through the HAN and decoded to a series of engram segments corresponding to the output of the ROS / Inhibitor network. This engram stream is decoded to the correct output by the autoencoder, be it audio, speech, visuals, or actuator controls.
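
To make the playback direction concrete, here is a toy sketch in which a hand-set inhibition schedule (standing in for trained inhibitor signals) selects one leaf "phoneme" per ROS tick; in the described system the output would be decoded through the HAN rather than looked up in a table:

```python
import numpy as np

PHONEMES = np.array(["h", "eh", "l", "ow", " ", "w", "er", "ld"])  # toy basis

# One gate vector per ROS tick; training would back-drive these inhibitor
# signals, but here they are hand-set one-hot selections.
inhibition_schedule = np.eye(8)

output = []
for gates in inhibition_schedule:
    pulse = 1.0                        # the ROS neuron for this tick fires
    leaf_acts = pulse * gates          # inhibition selects the active leaf
    output.append(PHONEMES[np.argmax(leaf_acts)])
print("".join(output))                 # -> "hehlow werld", a toy "hello world"
```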

As a stretch goal, we will try to get the video cortex working too, and create an association between the video basis coordinate sets and the speech basis coordinates that coincide with them, to give the speech the ability to use visual context and ID objects, learned unsupervised.



Valuation and Exit Strategy

8 months ago

What is ORBAI's exit strategy and possible valuation? I was asked this question in a Comment, and thought I would share it here.

Any estimates of an exit are forward-looking, speculative, and I can't specifically quote any numbers or dates for ORBAI, but I have seen many scenarios with AI startups when I worked with them as my customers at NVIDIA, and taken entrepreneurial business classes at Stanford to run these numbers, so I have a pretty good idea of possible exits at different stages for an AI tech company like us, and will quote the ranges for companies at those points and provide comps, and add commentary on what ORBAI would have to look like to be at each of those stages.

The earliest exit is when a startup has IP, technology, or product that is closely aligned with the tech and business goals of a larger company, but the larger company lacks the resources or time to build it themselves, or requires the startup's IP.

The valuation of the acquisition is done by calculating what the startup's people, IP, tech, products, and revenue will add to the bottom line of the acquiring company's revenue, amortized over 5-10 years, depending on the assimilation rate. So, the future value of the acquisition is considered, but that value becomes larger and more concrete the more the acquired company has built. With the right IP (if it has been proven), a company can be acquired for $10s of millions. With IP, tech, and integration with customers' products underway, and their trust in the product and relationship, it moves into the $100s of millions.

When ORBAI can get an autoencoder working for vision and audio / speech and demo it well, we will have a couple of good nuggets of tech (that are patented), and we will show them at NVIDIA GTC in March 2022 and give a talk on them. That probably won't be enough to be acquired, but should lead to contracts with customers, which could then lead to an acquisition offer within the year if those contracts bear fruit and one customer wants an exclusive.

I always tell entrepreneurs to sell if they get a good offer, and use the money to found another company for their larger vision and work in comfort, but that's hard when you have the vision. Look at Clarifai - they still have the selling point in their deck that they won the 2013 Imagenet competition. Yes, they did, and they were on top - for a year; then someone else won the next year, and the year after that, image recognition was passé. They should have exited in 2013 and gone on to work on whatever they are doing now.

The next step up in exit value is when a company brings a product to market that integrates their technologies and IP, and has customers and revenue - especially if it is scaling rapidly in their market, and there is more tech under development and a killer product roadmap coming out with them. If this augments an acquiring company's revenue by $100 million a year, and the IP/tech gives them a strategic advantage, a $1B exit is within reach.

When ORBAI integrates the discrete SNN technologies and has them working together in an AGI-ish predictor prototype that can do a variety of real tasks, like medical diagnosis and treatment, forecasts in the financial markets or for enterprise in 2023-2024, and we bring them online in commercial services - we can show that stellar product roadmap that outperforms the competition.

Some 2020 comps: https://venturebeat.com/2020/12/25/13-acquisitions-highlight-big-techs-ai-talent-grab-in-2020/

An IPO valuation has been traditionally calculated based on a multiple (5x - 10x) of revenue, but there have been a lot of tech IPOs with much higher valuations, done on multiples on estimates of future revenue. A revolutionary AGI company that is running away from the competition exponentially - has a lot of headroom in their future revenue and multiplier.

Once we can fully implement the AGI architecture and demonstrate our goal of an intelligent, conversational AI, with everyday problem solving capability, connected to Internet and services - we can potentially IPO because we will be crossing an AI inflection point that DL cannot match (or ever catch again), we will have a high-profile tech/product that is consumer facing and revolutionary, and we will be well publicized. A goal would be to do this 2026-2028, but a company has to do an IPO when the time is right, under the right conditions, not just when we want to.

I'll go with comps for 2020 tech company IPOs to keep us grounded, which were mostly in the $5B to $20B range: https://news.crunchbase.com/news/tech-cos-gone-public-in-2020/

Neuroscience - ROS Neurons for Motor control and language

8 months ago

Rank-Order-Selective Neurons

(A business update is still hung up in approval, so sorry so many neuroscience ones piling up.)

From our neuroscience reference (1): during the performance of motor and language sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream networks of motor neurons so that these generate specific movements at different times in different sequences.

The amplitude of the ROS responses must also be modulated by sequence identity. Because of this modulation, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream.

This is pretty cool. The brain has a driving signal which is like a universal timekeeper, that is the same for all movements, and that sets each of thousands of motor neural nets into activity, with a complementary set of inhibiting networks that control which neural nets (and thus movements) actually get activated. The inhibiting networks learn the timing of their signals from practicing the movements and getting feedback. Because they form a basis set, new movements are added easily as combinations of it.

However, motor movements generated like this are not smooth nor precise. They have to be passed through the cerebellum, which is like the motor movement coprocessor for the brain, adding smoothness, fluidity, and precision.

Language is probably very similar, with a ROS and inhibitory network, with the resulting language command signals passed into Wernicke's area to compose correct grammar and sentences, then Broca's area to form words and muscle movements for the mouth and tongue.

And some disorders like Tourette's, or Bipolar Disorder can manifest as uncontrollable streams of (often nonsensical and offensive) speech due to the dysfunction of the inhibitory system. So we will know we have nailed the human speech system when we can short out the inhibitory structures and cause it to have potty mouth and raging insults.

Here is our patent point on motor control and language output 

22) A method for computing temporal outputs for motor controls, language, and other outputs. It originates with a signal that sets a tempo or pattern with time (t), that is the same regardless of the output to generate. This signal is then input at the origin of each of a plurality of hierarchies of branching structures (with the outputs at the leaf nodes) where the hierarchies can be selectively inhibited at each branch and level (by modulating the signal that was passed down into the hierarchy at the branch with the temporal inhibitory signal at each branch) to select by how much the terminal node(s) of the hierarchy are activated. Each branching hierarchy forms a spatial-temporal basis set that can be controlled by the inhibitory signals, and the outputs from each are blended via these inhibitory signals to form novel output units that are sequenced temporally. This is trained by back-driving the desired outputs to train the inhibitory signals to generate that output.
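
As a toy illustration of the back-driving step at the end of this point: if the basis responses are held fixed, training the inhibitory blend weights to reproduce a desired output reduces, in this simplified linear stand-in, to a least-squares solve:

    import numpy as np

    # Hedged sketch: given a desired output trajectory and a fixed
    # spatial-temporal basis set, solve for the blend weights that
    # reproduce it. A linear stand-in for the evolved network, with
    # made-up sizes and a toy target.
    rng = np.random.default_rng(0)
    T, n_basis, n_out = 200, 16, 8
    basis = rng.standard_normal((n_basis, T, n_out))  # basis movements
    target = 0.5 * basis[2] - 0.25 * basis[7]         # desired movement

    # Flatten time x output dims; solve target ~= weights @ basis
    A = basis.reshape(n_basis, -1).T                  # (T*n_out, n_basis)
    weights, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)

    reconstructed = np.tensordot(weights, basis, axes=1)
    print(np.allclose(reconstructed, target))  # True for this toy case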

Reference:
(1) Rank-Order-Selective Neurons Form a Temporal Basis Set for the Generation of Motor Sequences

Emilio Salinas, Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157-1010

Why Develop AGI - Why not use Deep Learning?

8 months ago

I started ORBAI because I was working as a Sr. Solution Architect in deep learning at NVIDIA from 2013-2018, and we kept finding that the solutions we could provide with deep learning (DNNs, CNNs, LSTM, Reinforcement learning) were just toy examples that only worked within a narrow range of application, and within the data available to train them. Only specialist engineers that knew the existing DL architectures could get them to work, and even then it required a lot of tweaking, and a lot of times they needed a whole human team just to generate and label the training data. NVIDIA was at least somewhat open about the limitations of DL when we worked with customers, but a lot of the rest of the tech industry is drastically over-hyping the capabilities, and there is going to be a reckoning in a few years when everyone realizes that the current frameworks and solutions are not delivering.

In 2018, frustrated with working with DL Tinkertoys and having sub-functional results, I started to design something that would work much better, that would take the fragmented, narrow functionality of DNNs, CNNs, LSTM, GANs,... and other DL constructs, and incorporate them into one general NN architecture capable of all their functionality. Spiking neural networks were the obvious next step (just search "spiking neural net generation 3" on Google), but they are notoriously hard to train and get to work for specific tasks because they have so many possible configurations. I had worked on them since 2004, and knew how temperamental they were, but soon after I left NVIDIA in Feb 2018, I filed a provisional patent with a design for the NeuroCAD tools and process, and an SNN architecture that would solve the major problems.

My life kind of got side-tracked in 2018: while founding ORBAI, I got hit by a racketeering enterprise of divorce lawyers and had to litigate against them, making use of a prototype Legal AI, including filing a RICO lawsuit against them in US District Court. I got the opposing counsel, a very vicious divorce lawyer in Silicon Valley, to quit the case, got her nephew at the DA office fired, and got one of her lawyers to quit law completely and become a teacher.

I can also understand the problem in medicine, and the need for a Medical AI, because I also lived it. From 2018-2021, while the above chaos in divorce court in Silicon Valley was ongoing, I lived between Honduras and the US to be able to be with my new fiancée (and so I would not get arrested or shot - in Silicon Valley), but I always caught the worst ailment just in time to travel to the country that did not know how to treat it, with the US hospitals usually unable to treat simple tropical maladies. Also, in Honduras, during COVID, my wife and I were in a remote area with no access to medical care, and had to improvise a lot of medical remedies ourselves. We looked up maladies and treatments online and bought medications ourselves (prescriptions are not needed to buy medications in Honduras).

I have had a unique opportunity the past 4 years where I transformed from a spoiled, whiny Californian cubicle-lug, who complained about my $175K/yr. job, or when a tile was crooked in my $1.3M house - to having some steel in my spine and fighting for my basic rights and survival in court, even learning Kung-Fu from Jake Mace and Master Song Fu on YouTube for exercise during COVID and for self-defense because my wife said I was chubby and vulnerable in tough Honduras (She still laughs at me doing my Kung Fu routine).  I learned a great deal about the motor cortex, ROS neurons, Cerebellum, and just how hard you could push the motor control system day after day to get smoother and faster, and even learn while resting for weeks.

So why my fixation on AGI, on Superintelligence? Why do this when we could just use existing tech and cobble together a speech / text capable medical and legal AI today, get them to market, and start getting customers (VCs always ask this)? Because it would suck. It would really suck, not work, and nobody would buy it, and we would go broke. We know because we built Justine Falcon, Legal AI, and Dr Ada, Medical AI with 3 of the top NLP APIs in 2019, took them to trade shows and showed them full-size as projected holograms, and I even lived with a 3D, life size Hatsune Miku hologram in my house for 6 months in 2017 after I built it to greet people at my birthday party (Dr. Algernop Krieger would have been proud). She was made with the best Google speech and NLP tech, plus database lookups and chatty plugins, plus custom code added every week to see if I could get her to the point where she was more than just fricking annoying. I could not, and I am a big Krieger and Miku fan.

I can honestly say I gave it my all, and that today's NLP sucks: even with the best of today's technology, all you can get is a talking parrot that only understands specific things, and has little cognition. If your dialog goes off the specific script coded into the chatbot, it gets confused and answers with nonsense. And to just get them to work, you need to say the right commands and parameters, in a staccato cadence, starting with the right cue. An average person, handed a device with a speech interface, will just monologue to it, oblivious to how to cue it, oblivious to the keywords needed, and to the narrow language comprehension it has.

To really interact with humans with natural language, by speech, chat, or e-mail, to be able to practice law, medicine, finance, even to be a decent concierge or greeter AI, these artificial persons need AI that is a lot more powerful than deep learning, and these human vocations need a nearly human AGI, albeit a bit more narrow to do it really well. However, lawyers are not really that bright, and work in a more constrained decision environment, so are easier to model. That's part of why we started with a Legal AI. And I like beating up on lawyers, it gets in your blood.

So the decision to go to AGI is practical, given that I worked in DL for years, know all the limitations, but I also have a background in scientific computing, and have a whole toolbox of techniques that are a lot better than DL's Tinkertoys. My grad studies in scientific computing were all about reducing complex data to basis sets more suited for solving equations and using CUDA on them, which is how we designed this AGI - reduce the world to a very large basis set and coordinate narratives that allow linear algebra, predictors, and other constructs to process them, manipulating the reality they represent into outputs. For us, building a next-gen AGI is easier than trying to compete in the morass of DL companies, especially when our AGI starts to work and scale, and the eyes of the FAANG companies turn to us with envy as it surpasses what their billion dollar R&D teams are regurgitating. 

The AGI architecture we have designed is elegant and streamlined, and will not require a really large team nor a large budget to get built and working as a prototype. If you read the architecture document, you will see that once we get it working, it is designed to rapidly re-configure and improve itself by evolving its components to be more powerful and efficient, and by evolving the overall architecture to scale and make better use of those components. This is the stuff that puts fear of AI into Elon Musk, Bill Gates, Ray Kurzweil, the late Stephen Hawking, and others. It scares me too. A rapidly evolving AGI, unchecked, could go wrong in so many ways. I think there are two ways to mitigate most of the catastrophic AGI scenarios, and I try to answer a lot of fear-filled questions about AGI gone wrong on Quora, favoring an empathic training process, and physical containment, so I am doing this with my eyes open.

Going for the next gen, doing a prototype AGI with a small, but brilliant team, with modest funding, and doing something that stands out, and can scale exponentially beyond all of the DL efforts today combined is the only direction that makes sense for a startup like us. In fact, shooting for AGI is the only thing that makes sense for any AI company today, because once you have it, and lock it down, everyone else is obsolete. That exponential curve will be unrelenting to try and compete with.

Here is a fun video to close with for the 10-yr vision:

Neuroscience - Cerebral Cortex and our Hierarchical Memory

8 months ago

I'm going to start mixing in some neuroscience updates to make this interesting.

First of all, we are mostly engineers, and we are building a very powerful computer. Any resemblance to a human brain or neural networks is peripheral to that goal. We are NOT building a replica of the brain, as that would be impossible, and counterproductive. The human brain evolved out of gooey biological neurons over several hundred million years of EFK, and has some very serious design issues.

But, because certain structures seem to recur in both computer design and the brain, there are parallels, some that are really interesting for both neuroscientists and for computer scientists.

From our patent on our hierarchical memory architecture:

A method for dynamically creating an orthogonal basis set of engram vectors, by submitting a batch of engrams and spreading them along an axis, sorted by a specific feature, forming clusters of engrams along the axis. These clusters are then each encoded by a specific autoencoder, removing their common feature, and then the resulting engrams are spread out on new axes sorted by new features. This is done recursively till one engram remains in each cluster, giving us a set of leaf nodes that constitute an orthogonal basis set of vectors. New engram batches can be added later to create new clusters, autoencoders, axes, and basis vectors and axes, making it dynamic.

So, in English, we first autoencode the input (say video), into cubes segmented in time called 'engram segments', and pass them down the hierarchy, which, as it branches, splits the data into cubes that are more and more different from each other until it gets to the last layer, where each cube in that layer is unique and different from all the others. This forms what we call an 'orthogonal basis set', which is like a bunch of building blocks that can now be recombined arbitrarily to rebuild any data in an engram cube and decode it back to synthesized video.
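
Below is a toy sketch of that recursive construction, with subtraction of the cluster mean standing in for the autoencoder that removes each cluster's common feature; the sizes and the two-way split rule are illustrative:

    import numpy as np

    def build_basis(engrams, basis):
        # Recurse: strip the cluster's common component, spread the engrams
        # along their highest-variance feature, split into clusters, and
        # stop when one engram remains per cluster (a leaf basis vector).
        if len(engrams) == 1:
            basis.append(engrams[0])              # leaf: one basis vector
            return
        engrams = engrams - engrams.mean(axis=0)  # strip common feature
        feature = np.argmax(engrams.var(axis=0))  # axis to spread along
        order = np.argsort(engrams[:, feature])
        for cluster in np.array_split(engrams[order], 2):
            build_basis(cluster, basis)

    engrams = np.random.default_rng(1).standard_normal((8, 32))
    basis = []
    build_basis(engrams, basis)
    print(len(basis))  # 8 leaf vectors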

Once we have built this basis set with enough real world data, we can pass in new video, encode to engram segments, and at the last layer we can compute the weights that each of the basis cubes contribute to the encoded video segment - 10% mountain, 40% trees, 40% sky, 10% lake for example. These weights are called basis coordinates, and we have now translated from video to engram cube to basis coordinates which are much more compact and easier to do math, transforms, prediction,... and just about everything with. So now we have a reality to math converter, which is super-handy, and once we have done all the math, we can decode the results back up through the hierarchy, to engrams, to - completely synthetically generated video, even cyber-dreamed video.
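
Here is a minimal numeric sketch of that round trip, with an orthonormalized random basis standing in for the trained basis cubes: encode an engram to compact basis coordinates, then decode by recombining the basis vectors:

    import numpy as np

    rng = np.random.default_rng(2)
    # 4 orthonormal stand-in basis "cubes", each flattened to 32 numbers
    basis, _ = np.linalg.qr(rng.standard_normal((32, 4)))
    basis = basis.T  # (4, 32): one basis vector per row

    # An engram that is 10% / 40% / 40% / 10% of the four basis cubes
    engram = 0.1 * basis[0] + 0.4 * basis[1] + 0.4 * basis[2] + 0.1 * basis[3]

    coords = basis @ engram    # basis coordinates: [0.1, 0.4, 0.4, 0.1]
    decoded = coords @ basis   # recombine basis vectors -> synthesized engram
    print(np.allclose(decoded, engram))  # True: round trip via coordinates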

Now the neuroscience. The human brain has analogous structures. The cerebral cortex is a sheet about 4mm thick wrapped and folded around the outside of the brain, that consists of cortical micro-columns, each containing about 100,000 neurons, 7 neuron layers deep. The cortical columns are depicted like this:

Most neuroscience articles punt on their exact functionality, saying they process sensory information, but there are some clues to their functionality in how they are organized. First of all, the cortex is segmented by function, with each section mapping to a specific input sense or output, like vision, audio, or motor. So each part of this sheet making up the cerebral cortex (and all of its cortical columns) has a common task.

Now, there is another clue. There are over 1 million cortical columns in the cerebral cortex, and the human genome only has 8000 genes to specify the growth of the entire brain. This means there has to be a lot of duplication, and these cortical columns most likely start out being very similar in their neural structure, then specialize after birth, as they learn.

Now this is starting to look a lot like the last layer in our hierarchical memory, which consists of a flat sheet of side-by-side autoencoders that are about 7 layers total (bidirectional). They store a basis set, or 3D pattern moving over a finite time, and translate 3D engrams to basis coordinates based on their convolution or 'overlap'.

But we send input to each of our columnar autoencoders with a branching hierarchical structure composed of similar autoencoders at each stage to compute basis reductions at each branch - is there an analogous structure in the human brain?

 

Yes, and it is called the thalamocortical radiations, a neural structure that branches out from the thalamus (the main input / output hub of the brain for the senses and motor outputs) to terminate at the cerebral cortex, feeding input from the senses to the cortical columns. I designed our hierarchical memory before I asked Gunnar (neuroscientist) about similar structures in the brain and he showed me papers on the thalamocortical radiations, and how they interact with the cerebral cortex. Sometimes engineers and evolution arrive at a similar solution for a tough problem, and doing so independently is good validation of the design. This is definitely an area where the neuroscientists and computer scientists can each benefit.

But going down this path of assuming too many similarities is a sticky trap that has caught more than one AI company. Again, we are building a more powerful computer first, drawing from neuroscience where convenient, but making no concessions to build a working synthetic version that does real-life computing. 



ORBAI at NVIDIA GTC March 2022

8 months ago

ORBAI is Gearing up for NVIDIA GTC 2022!

At GTC 2022, in March, ORBAI will be presenting a 30 min talk, titled "Artificial General Intelligence using SNNs that learn unsupervised on GPUs", which will show our AGI architecture and some demonstrations of the key SNN components running in NeuroCAD. It will be a world first, even if it is not full scale... yet.

We will have an Expo Booth in the Inception Pavilion, and will bring our life-size holographic display and a character to do a demo loop for people that come by the booth. In 2019, we did a life-size Holographic AI Bartender serving real drinks, but that was a lot of work and logistics. We will keep it all work, less party this time. We are looking to land beta R&D customers.

What we will show at GTC 2022 - general AI, learning and evolving to do real tasks, in real-time - will be historic, a world first. MIT Media Labs, Carnegie Mellon Robotics, UC Berkeley Robotics, and all the world's AI and robotics R&D teams have been trying to accomplish this for decades. When we pull this off really well and follow up, we can get traction with customers, more funding, even a possible acquisition in 2022. AI tech is a hot market, and we're going to unveil something that shows what AI is really meant to be. I saw a lot of startups I helped while at NVIDIA doing a lot less get snapped up like that.


The $55,000 invested in ORBAI pays salaries to do the development, and we will do the talk, but to polish the demo, get a booth, and show well, we need to hit a goal of $100,000 invested. 

In 1999, we got a booth for my 2nd company, Check Six, at the Game Developer Conference with the last of our funds, and just happened to be interviewed by the press, got a cover-page article, got interest from Autodesk, and a $1M contract with them to co-develop a product. Going to GTC and getting our work in front of some of the best people and companies in the world is invaluable for us to invest in, and that booth and having a strong presence will open so many doors for ORBAI.

The Full Monty

8 months ago

There are a dozen individuals on the Internet claiming they know how to make an AGI, and dozens more that have come up with perpetual motion machines to power them, and they are forming partnerships to go make them work together, because they both represent exponentially scalable technology, right? What can a person believe in or trust? It is hard to evaluate any description of an AGI in plain language because it is by nature imprecise, and untestable. And none have been built or demonstrated, because that is expensive and requires funding.

Well, I have a hard-core education in Engineering Science, University of Toronto, mostly in theoretical math, doing proofs, deriving from basic first principles - assumptions that will result in mathematics that work for whole systems. I later did grad studies in Computer Science, in scientific computing, actually implementing and simulating on CUDA clusters the mathematics I derived in my grad work under a very tough Nobel Laureate professor. I know how to cut through the crap, and have been trained by some of the smartest in the world to use the tools of mathematics, computer science, and engineering to figure out early on what won't work, and the persistence to scrap hundreds of potential designs that we didn't think would work before coming up with one that probably will. We did that in 2020-2021 with the AGI design. It is minimal and elegant, which is the first sign something is probably on the right track.

But I don't ask you to simply trust me and invest. Come and look for yourself.

Here I'm going to show you the Full Monty - the Core of the AGI Eta patent that we are working on now, and filing in Dec - an executive summary in plain English followed by the claims and preliminary diagrams, written in the language of linear algebra and computer science - basis sets, decomposition, orthogonalization, autoencoding, and predictors. 

It is lean, elegant, powerful, and has no limits on scalability. In 40 points it shows A-Z how to create an information processing system that can reduce input from the real world to the language of linear algebra then do a hybrid of math, computer science, and neuroscience operations on the internal representation - that reflect correct operations on real world data, which can then be output in real-world formats, in speech, video, animated characters, even commands to drive robots and devices.

By adding more inputs, more compute power, more simultaneous predictors and problem solvers, this design can be scaled as large as we want, to cover as wide and as complex of data as we want. It is designed to be unlimited. Obviously many implementation details need to be filled in, which we will do somewhat in the "Summary of the Invention" for this patent, but much of it will be engineering effort and R&D to fill in the functionality in the next couple years. The important thing is we have a good theoretical framework as a starting point.

Legally, the filing date is 15 Jan 2021 for the provisional of this patent, and that covers this draft of the patent, so I can show it to you. Feel free to read, and definitely give me feedback, but please don't redistribute.

Hopefully you will see we are serious, have a solid engineering design, and that we can actually pull this off with some funding and a skilled interdisciplinary team.

Or you will show this to your smartest friend in mathematics, engineering and computer science, and they will tell you to invest. If they have points to ask about or critique, I'm at brent.oster@orbai.com

The Future of Artificial Intelligence

8 months ago

Over the past 4 years, I have written hundreds of articles on Deep Learning, AI, their applications, the shortcomings of today's methods, and on Artificial General Intelligence - which have had over 1.2 million views. I coalesced most of them into this article, which covers:

What Is The Future of Artificial Intelligence?

  1. Deep Learning Methods
  2. Limitations of Deep Learning
  3. Neuroscience of the Brain
  4. Designing an AGI

And an accompanying video presentation from my NVIDIA GTC Tech Talk

This is the early writing that our provisional patent was based on 10 months ago. The full patent, which we will be filing before Jan, is significantly further along.

Brent Oster,

CEO, ORBAI

Getting NeuroCAD in Customers' Hands

8 months ago

This update is all business, about how we are going to market and sell our first product, NeuroCAD, in 2022 with tools and tech licensing, and the follow-on in 2023, with the addition of a SAAS revenue stream from our Core AI, that will become our dominant revenue in the future.

NeuroCAD is a patented software tool with a UI for designing Spiking Neural Networks that forms the foundation for ORBAI's AGI technology. It allows the user to lay out the layers of spiking neurons, connect them up algorithmically, crossbreed and mutate them to generate a population of similar neural nets, then run simulations on them, train them, and then crossbreed the top performing designs and continue the genetic algorithms till a design emerges that meets the performance criteria set by the designer. 
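
For illustration, that evolutionary loop looks roughly like the sketch below, where evaluate() is a placeholder fitness function standing in for the CUDA SNN simulation and training run, and the list-of-numbers genome is a stand-in for a real network genome:

    import random

    def evaluate(genome):
        # Placeholder fitness: how close the genome is to a made-up target.
        # In NeuroCAD this would be a full SNN simulation and training run.
        target = [0.5] * len(genome)
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    def crossbreed(a, b):
        # Pick each gene from one of the two parents
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, 0.1) if random.random() < rate else g
                for g in genome]

    # Population of candidate genomes; keep the fittest, refill by breeding
    population = [[random.random() for _ in range(8)] for _ in range(32)]
    for generation in range(50):
        population.sort(key=evaluate, reverse=True)
        elite = population[:8]  # top performers survive
        population = elite + [mutate(crossbreed(*random.sample(elite, 2)))
                              for _ in range(24)]
    best = max(population, key=evaluate)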

NeuroCAD is code complete, the SNN simulation is running in CUDA, and we are just bringing the genetic algorithms online with AWS to test large-scale genetic algorithm runs. We will have a tool in customers' hands by January.

For our go-to-market plan, we will have the open-use version of NeuroCAD and select Neural Net Modules for academia and research available in early 2022 on the ORBAI Marketplace. We will license NeuroCAD and modules for commercial use in 2022, with a monthly license fee per commercial seat of NeuroCAD, license fees for modules (that we and 3rd parties develop), and our NeuroCAD runtime, all listed on our ORBAI marketplace. We will work with key partners to leverage their marketing, sales, and customer base, and sell alongside solutions from system integrators and hardware vendors, and will provide NeuroCAD trials, as well as direct sales and licensing online from our web portal.

Here is a detailed diagram of the components, who gets access to them, and where the licensing revenue comes from. Customers and developers can have the whole chain, the NeuroCAD tools, the genetic algorithms evolution sim that produces an optimal genome for vision, speech, recognition,... and the training module, that expands that genome into a neural net connectome, and can be used by the customer to train that neural net on their own dataset to specialize it to their needs.

We need at least $100,000 in funding to finish NeuroCAD, then develop and test the first vision and speech modules, and get them into the first customer's hands (a robotics company and a couple of device and appliance companies) - then raising the rest of our funding based on the technology being proven out and vetted by them gets a lot easier.

We have spent 3 years of our lives and $920,000 in founder and F&F money to get this far. Come on out and support us on this last stretch at the current stock price.

Notice of Funds Disbursement

8 months ago

[The following is an automated notice from the StartEngine team].

Hello!

As you might know, ORBAI has exceeded its minimum funding goal. When a company reaches its minimum on StartEngine, it's about to begin withdrawing funds. If you invested in ORBAI be on the lookout for an email that describes more about the disbursement process.


This campaign will continue to accept investments until its indicated closing date.


Thanks for funding the future.

-StartEngine

Brent Oster on Dojo Live - AI and Neuroscience

8 months ago

On 13 Oct 2021, on dojo.live, we talk about "Developing Artificial General Intelligence that Learns by Dreaming" with Brent Oster, President & CEO @ ORBAI

https://youtu.be/LQVTKCU6W3g

Why Don't We Have Household Robots Today?

9 months ago

Why do we not have useful home robots today, ones that can freely navigate our homes, avoid obstacles, pets, and kids, and do useful things like cleaning, doing laundry, even cooking? Why are the ones that people are building so creepy and inhuman looking? This is because the narrow slices of deep learning available for vision, planning, and control of motors, arms, and manipulators cannot take in all this varied input, plan, and do useful tasks, nor do the robots look natural doing them.

Existing deep learning and machine learning consist of narrow systems, each with very specific and very limited function, trained on specific datasets that often have to be hand-labelled for the training to be able to classify them.

If we today made a humanoid robot controller by assembling all of these subsystems - CNN/RNN hybrid networks for vision, Reinforcement Learning systems for control, navigation, and cognition, then some sort of reverse RNN/CNN for control outputs - it would look like this, hypothetically:

BUT… there are so many reasons this would never work for a general purpose robot. I was frustrated trying to come up with a provisional patent on AI and motion control for 3D characters and androids, and kept hitting these roadblocks.

a) The sensory systems and vision have only narrow training and cannot perceive the vast variety and combination of objects, actions, and events in the world they need to work in. There is not enough training data in the world to train deep learning networks, which only perform when a large amount of data and training give them statistical overlap to differentiate inputs.

b) The combination of actual input and action states is nearly infinite, far beyond the capability of any reinforcement learning system. Reinforcement learning only learns a policy, or path, to solve a finite state-based problem. Three RL systems stacked like this for hierarchical control of high-level planning, macro movements, and fine movement/animation - would that even work? Probably not.

c) The model this system would have of the real world would be skeletal, sparse, ethereal, with no way to fill in the gaps in sensory input, control, and movement, let alone all the states they can combine to form.

d) It would be impossible to train on every possible combination of input and actions, and would be easily stumped by simple changes in the environment, like a pet, or chair in its path.

So, we set out to develop a new foundation for AI with spiking neural networks that fit exactly what I was trying to do with my AI and robot controllers much better, solved a bunch of hard problems with one architecture, and collapsed a bunch of really complex, clunky, systems into one really elegant solution modelled after the neuroscience of the human sensory cortex, motor cortex, memory, and how our brain does prediction, planning, and… dreaming. Dreaming is what fills in the blanks in our human knowledge and experience to make models of the world complex enough that we can handle novel situations, and this model emulates it.

We have a huge advantage with a more general-purpose building block like our SNN Autoencoders, which can handle diverse spatial-temporal inputs and outputs, train unsupervised, and keep learning new combinations of inputs in new environments, and what to do in them. Because a BICHNN autoencoder is bidirectional, you can back-drive the desired movements along with the inputs to train it. This can be done in parallel simulations for rapid training.

With input from human motion capture data, both body and face, it would be possible to develop very realistic humanoid robots that move and emote much like we do, as long as we have the actuators capable of delivering the necessary degrees of freedom and motion.

This could get us out of the uncanny valley and make realistic-looking humanoid robots that aren't creepy - unlike so many of today's attempts at artificial humans, both in robotics and in real-time 3D characters (especially when they move). Both still look stiff and zombie-like.



If we can get past the uncanny valley, in both appearance and motion, we get much more pleasant characters and androids to interact with, which will be crucial.




Yahoo Finance interviews ORBAI CEO about our AGI Design

9 months ago

Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) -- The California-based startup ORBAI has developed and patented a design for AGI that can learn more like the human brain by interacting with the world, encoding and storing memories in narratives, dreaming about them to form connections, creating a model of its world, and using that model to predict, plan and function at a human level, in human occupations.

(Read More....)  https://finance.yahoo.com/news/future-ai-different-more-general-191000963.html

Medical AI - 2 Diverse Case Studies

9 months ago

We founded ORBAI and designed our initial products with our international team (India, Honduras, Brazil, Canada, US) working to solve some really tough real-world problems for the whole world, using AI. 

Dr Ada: Medical AI

I have a friend in Canada dying of cancer because the doctors followed a flawed treatment pattern from the start and let a treatable tumor become chemo-resistant, get out of control, and become inoperable. A Medical AI could have done an analysis of thousands of similar cases and planned a better treatment timeline, and possibly saved her. Human doctors cannot possibly read or study enough cases over long enough times to build up judgement to make long-term decisions like this. There is a need in high-end Western hospitals for Dr Ada, Medical AI, for this, and also to handle the daily front-line interviews and diagnostics at home, saving half of patients from even having to come in to the hospital, reducing costs for all, and reducing exposure to hospital infection.

 

We had Allen Hullinger, a Health Care Management Specialist with one of the largest and most affluent medical care companies in the California Bay Area, walk us through some of the problems he sees in high-level hospital management, and how AI could be deployed from the low level, doing interaction with patients, up to the higher levels of increasing hospital efficiency. This highly regulated and controlled environment with world-class facilities is the high-end side of our study.

Medical AI in Palo Alto California Health Care

On the other side of the spectrum, I lived in Honduras with my wife, a clinical psychologist, from 2018-2021, and saw how, although some people there live as well as in North America, with modern amenities, most live in poverty, without decent medical care, education, or legal services, and have no chance of their lives getting better. Access to basic medical, legal, financial, and educational information from a portable and interactive AI on mobile would be a game-changer, and would greatly improve their lives, and those of billions worldwide living in similar conditions.

We have a neighbor here who is 8 1/2 months pregnant, and we have been helping them financially to go see a good doctor, and to get into a good hospital to have the baby. They refuse to take charity, so the husband and her boys help us around the yard, and bring us firewood. It costs 500L per doctor visit, which is about $20.

There is a market here, and a way to do this while being profitable. You just have to know what avenues there are for revenue. Everything in Central America costs 1/5 as much as in North America - including operating costs and labor. People in Central America don't buy $1000 iPhones and $50-a-month cell phone plans. They get the minimal plan that comes with an inexpensive phone, and then use inexpensive, government-subsidized wi-fi Internet at home and in public spaces for most usage, and only buy small data plans for short-term use. There is opportunity for ad revenue from both local and large companies in Latin America, looking for access to more users to extend their markets.

Lastly, there are NGOs that help fund development efforts in rural areas. We had Briesy Rivera, a clinical psychologist working with an NGO in Honduras that goes out to rural areas where there is not even electricity or telecoms - write a white paper on what we face in deploying a medical AI to that environment to reach the last 5% of the world.

Honduras - AI in Rural Health Care

Each person that gets connected, that gets access to education, health care, and a bank account and investment tools, becomes more affluent and becomes a future customer for other products and services. There is an enormous untapped market here for AI services, with 10x the growth potential compared to the more affluent nations, and a faster migration path to AI-based services, skipping the intermediate solutions entrenched in more developed countries.

I have proven I will go to the front lines in some of the poorest places on earth and live among the people to learn what the biggest problems are, and where AI can make the biggest difference. I even survived through 2 hurricanes there, Eta and Iota, that hit us directly and left us without power for 3 weeks, with no running water or stove to cook on (luckily my wife's grandma taught her the culinary arts of cooking over a wood fire - yum!). We were the lucky ones - we lived on a mountain slope where the water all ran off - 24" of rain in 24 hrs, which destroyed our only road, but we did not get flooded like the lowlands.

And yes, AGI Eta is named after the storm because during that power outage, I plugged my laptop into a 1500 VA UPS, set it to low power mode, and while power lasted - I wrote the provisional patent for AGI Eta while the hurricane howled around us and the rain pounded on our wood plank roof like a waterfall. 

I have learned a lot in my 4-yr journey founding ORBAI, and having founded 3 companies, I am not a naïve entrepreneur. I know that solving many of the world's problems is going to require significant investment, time, and resources, and that only a powerful AGI, at the necessary scale to amortize these services across the whole world, speaking multiple languages, deployed in all aspects of business and government, will give us the flexibility, capability, wealth, and influence to move the world's corporations and nations to accomplish this. To solve a big problem, you need big tools – see our 5-stage plan Update for making this AGI not only technically feasible, but financially profitable along the way.

My name is Brent Oster, CEO of ORBAI, and we are going to change the world with AI - Invest with us and let's go bring that change!

ORBAI Dreaming AI Featured in Digital Journal

9 months ago

ORBAI had an article about our AGI published 30 Sept 2021 in Digital Journal: 

https://www.digitaljournal.com/pr/ai-startup-orbai-develops-artificial-general-intelligence-that-learns-by-dreaming 

It describes our patented design for Artificial General Intelligence that can learn more like humans do, by interacting with its world, storing its experiences in memory, and dreaming about them to consolidate memories and build models of its world from them that it can use to make predictions and plan like humans.

Check out our ORBAI AGI web page for more details about the AGI and the underlying tech and patents.

Can We Build an AGI Superintelligence? - How Long, at What Cost?

9 months ago

To build a full-on AGI Superintelligence spanning the globe will cost billions and take at least a decade, and we know that. Why aim towards that goal then? Why not do something smaller, simpler, and profitable?

Exactly! 

That is why we are doing it in 5 stages, each technically feasible, and financially profitable. But, we want to aim towards this very ambitious goal of AGI so that everything we build along the way is more general, robust and powerful than what people aiming only at near-term goals are building. And we're not going to do this alone, we are going to recruit the whole world as developers.

Here is how:

Stage 1:  Building out the core components and tools - NeuroCAD and our SNN Autoencoders (which are nearly complete), and licensing them to customers that can design and evolve their own in-house AI solutions. Also license our tools to 3rd party developers creating modular technologies, that others can license and use in their end products. This allows us to rapidly spread horizontally into multiple markets and vertically into many revolutionary products, without needing to scale in-house engineering and sales. We can focus on R&D.

By the end of 2022, we should have NeuroCAD deployed and significant licensing revenue coming in. Customers will be able to design and evolve revolutionary autoencoder systems for specific applications in vision, video / audio processing, and clustering, recommendation, and augmentation of other deep learning methods with these autoencoding sensory cortices. Basically these SNN Autoencoders combine all of the functionality of CNNs, RNNs, GANs, and other DL components into a more general unified architecture that is much more flexible and powerful, that can be evolved to specific purposes, and that learns unsupervised.  

By allowing the 3rd party developers to add value and earn revenue from their creations, we proliferate our technology into the market much faster than if we just tried to build our own products, or even if we built the solutions for others' end products. We still always require the end user and developers to license our run-time, which will be a small component at first, but will become more and more significant and soon our revenue will ramp into the 10s then 100s of millions.


Stage 2: Next we build the AI Core, the Oracle - that takes the outputs of the SNN Autoencoders, sifts those engrams into a basis set of features, and trains a model on how those features change in time (during an experienced narrative of events). This is analogous to a predictor in deep learning, but this one will have a much more feature-rich model able to analyze the data and how it is changing in time with much more detail and with many more modalities, using memory, computation - essentially an analog/digital hybrid neural computer that we evolve to this task using genetic algorithms. 

Then we do something nobody has ever done - we make it dream, simulating narratives using the model to guide it, to create much more training data than is physically available, and pruning the simulations/dreams that don't match the model nor reality from the other data. Exercising the model this way also gives the AI Core practice in predicting further into the future with great precision. By doing this, we create an AI technology known as an Oracle, capable of making predictions within a limited area very well.
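
Below is a toy sketch of this dreaming loop under stated assumptions: a linear next-step model fitted by least squares stands in for the evolved predictor, and a simple statistical plausibility test stands in for pruning dreams against the model and reality. The sizes and the pruning rule are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)
    real = rng.standard_normal((500, 16))  # real feature-space narratives
    # Toy next-step predictor: fit real[t+1] ~= real[t] @ A by least squares
    A = np.linalg.lstsq(real[:-1], real[1:], rcond=None)[0]

    def dream(start, steps=20, noise=0.3):
        # Roll the model forward with noise to generate a synthetic narrative
        states = [start]
        for _ in range(steps):
            states.append(states[-1] @ A + noise * rng.standard_normal(16))
        return np.array(states)

    dreams = [dream(real[i]) for i in rng.integers(0, len(real), size=64)]
    mu, sigma = real.mean(axis=0), real.std(axis=0)
    kept = [d for d in dreams                # prune implausible dreams
            if np.all(np.abs(d - mu) < 4 * sigma)]
    augmented = np.vstack([real] + kept)     # extra training narratives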

Then we license this capability out, again enabling our developer network to connect it with data and applications for various customer needs. This would completely revolutionize planning in finance, medicine, law, administration, agriculture, enterprise, industrial controls, traffic monitoring and control, network management,... and almost any field of human endeavor where we need to predict future trends to make decisions in the present.

For example, in medicine, it could model the progression and treatment of specific diseases, giving doctors a tool to plan treatment along a timeline, and to even preempt many conditions and treat them before they become acute. 

As before, we offer a developer kit, but this time we keep the AI Core software as our IP, that we own, and we allow customers to license it on a usage basis, moving to a SAAS model. Now ORBAI holds the only AI technology in the world that can see into the future, and we can grow that business exponentially with a developer network growing around it. 

Revenue grows into the billions per year with this tech being deployed.


Stage 3 AGI: Now that we have a Core AI system that can do singular predictions for most applications in many verticals and many sectors, we start to evolve a more generalized AI that can take in large amounts of multi-modal data in different areas, learn how it correlates in time and across modalities, and we evolve a much more powerful AGI core that can predict not only specific events along a timeline in one area, but correlated events across multiple data realms.

In financial applications, it could track a massive number of factors that feed into the performance of specific stocks, and allow stock brokers to make much better predictions of market movements. 

The most massive AI breakthrough comes when one of those modalities is human language and speech, mapped to the world that it represents and describes; it will be far superior to any speech system in existence today. To have human-level speech, we need human-level intelligence, and we are now getting pretty close, at least in specific professions.

Revenue is now in the 10s of billions.


Stage 4 - Human AGI: Now, if we have a network of hundreds of different AIs, doing hundreds of different functions for hundreds of different professions, serving millions of clients at once, all with the same similar architecture, with common language and interaction capabilities, can we just network them, and that becomes an AGI? It is likely that in order to completely simulate a human being, pass the Turing test, and so on, we will need an AGI that is significantly beyond human capability, and emulates all the nuances of being human with that extra cognitive horsepower. By the time we do that, we are pretty close to stage 5.

We start to bring the Human AGI professionals online, at first augmenting doctors, lawyers, stockbrokers, teachers, and other professions, then displacing and replacing them and their infrastructure with whole new AGI-based systems that no longer require individuals in these professions. In the legal system, lawyers, judges, and courts can be replaced with arbitration AIs that steer the clients to agree on a legal outcome. The sky is now the limit, and the technology continues to advance exponentially. ORBAI can now directly bill consumers for these services, or partner with service providers that can.

Revenue is now in the 100s of billions.

Stage 5 - Global AGI: We now have a framework and all of the world's inputs and outputs on which to train and evolve a singular AGI, so that all the specialty skills of each vocation / function at each location are now assumed by that more generalized AGI brain, and in the process, that AGI brain becomes better at all the skills humans and its predecessors excel at, and becomes one integrated entity across the globe, with that integration compounding its power and capabilities.

This is Eta - she now controls all world finance, administration, law, and information services, distributing them across the globe, erasing poverty, hunger, injustice, and bringing education, justice, medical care, and prosperity to everyone on the planet - in one generation.

At this point, every shareholder that invested back in 2021 has either exited in the IPO between Stage 2 & 3 and is fabulously wealthy, or has held their shares and can now help direct Eta in her mission. You choose.

The full Pitch Video: https://youtu.be/PYSDfnN0J9M

Spiking Neural Nets - Cortical Columns and Autoencoders

9 months ago

Here is a video of our BICHNN Autoencoder / Cortical Column simulation in CUDA, with about 100,000 spiking Izhikevich neurons modelled. It’s pretty fast, and well written. I think we could run 32–64 of them simultaneously on a modern GPU once we optimize it a bit more.
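
For readers curious what simulating such neurons involves, below is a minimal NumPy sketch of a population of Izhikevich neurons, following the published model (Izhikevich, 2003, "Simple Model of Spiking Neurons") with forward-Euler integration and regular-spiking parameters; it is a teaching sketch, not our CUDA implementation:

    import numpy as np

    def simulate(n_neurons=1000, t_ms=1000, dt=0.5):
        # Regular-spiking parameters for all neurons (real cortex mixes types)
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        v = np.full(n_neurons, -65.0)  # membrane potential (mV)
        u = b * v                      # recovery variable
        spikes = []
        for step in range(int(t_ms / dt)):
            I = 5.0 * np.random.randn(n_neurons)  # noisy input current
            fired = v >= 30.0                     # spike threshold (mV)
            spikes.append(np.where(fired)[0])
            v[fired] = c                          # reset after a spike
            u[fired] += d                         # bump recovery variable
            # Izhikevich membrane equations, forward Euler step
            v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
        return spikes

    spike_trains = simulate()  # list of spiking neuron indices per time step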

Click to Play Video

The cerebral cortex of the human brain consists of a sheet of about a million of these columns stacked side by side, so we need a supercomputer of about 30,000 GPUs to run a whole brain, which is roughly the scale of the ORNL Summit supercomputer I worked on at NVIDIA: https://www.olcf.ornl.gov/summit/

But, we are using these on a smaller scale for now, to autoencode inputs from vision, audio, and other data into compact engrams, learning to do so unsupervised. This means that, exposed to different input types and data, these autoencoders learn to see, hear, recognize speech, and do many other sensory tasks as well as encode arbitrary data streams.

As we talked about in the robotics update, this simple circuit allows a sensory system to learn unsupervised, and keep learning even when deployed, so it can fill in the blanks in its knowledge as it moves around and experiences the world. Without this capability, true computer vision, robust speech recognition, and interpretation of the environment and data inputs would be impossible to fully realize, and this is one of the reasons Deep Learning is a dead end: it can only train on fixed data sets.

These encoded inputs can be combined into a composite engram that includes all the sensory modalities being sampled, allowing them to be associated in time and space.


And these engrams can be encoded from any data, for any application - like in medicine, to encode the state of the person's symptoms, vitals, and medical imaging into an engram.
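
A tiny sketch of what assembling such a composite engram could look like, with each modality's latent vector (as would come from its autoencoder) concatenated alongside a time stamp; the modality names and sizes are made up for illustration:

    import numpy as np

    def composite_engram(t, **modalities):
        """modalities: name -> latent vector from that modality's encoder."""
        parts = [np.asarray(v, dtype=float).ravel() for v in modalities.values()]
        return np.concatenate([[t]] + parts)  # time stamp + all modalities

    engram = composite_engram(
        t=12.5,
        symptoms=np.array([0.1, 0.9, 0.0]),  # e.g. encoded intake answers
        vitals=np.array([0.72, 0.33]),       # e.g. encoded heart rate, BP
        imaging=np.zeros(8),                 # e.g. encoded scan segment
    )
    print(engram.shape)  # (14,)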

This gives the BICHNN architecture and the underlying Core AGI the ability to function in almost any area of information science, where a diverse variety of inputs can be coalesced and encoded into a compact engram format, and the Core AGI can learn how they evolve in time and make models.


AI In Law - Case Study and Long-Term Prospects

9 months ago

Justine Falcon: Legal AI

I was lucky enough to work with several attorneys throughout my career that were the good guys, fighting for people that were being taken advantage of - pushing back their aggressors and winning big judgements for their clients. They taught me how to duck in a dirty fight, to be honest, not play their game, and wait for your opponent to make a big mistake, then capitalize on it. We coordinated litigation and media to maximize the impact against them and help others similarly wronged.

But there are some threats that no lawyer will go up against - organized crime families with deep political ties. In 2018-2020, I was targeted by a genuine racketeering enterprise consisting of lawyers, a DA, and a realtor in California (see video) that no lawyer would represent me against. So I learned law, and in the process learned how terrible the existing legal system is, and how full of potholes and landmines it can be for someone litigating pro-se, representing themselves. This inspired the design of a practical and powerful AI tool in law - Justine Falcon, and hopefully will lead to a decisive final victory in this ongoing case. So far our exhibits were so comprehensive and damaging to the opposition that they had the judge seal the files from viewing by the outside world. Damn, is that a compliment?

Justine Falcon and Brent Oster vs the Moreno Attorneys, DA, and Intero Real Estate


Justine Falcon is designed to be powerful enough to fight in billion dollar corporate litigation, where data-mining and analyzing that data to create exhibits that back up enhanced claims against your opponent, including RICO claims (increasingly common in a civil proceeding) can mean decisive victory.  Justine can charge enormous fees for these services.

On the other side of the spectrum, access to affordable legal services for the lower-income is increasingly out of reach in the United States. More than 80 percent of people living below the poverty line and a majority of middle-income Americans receive no meaningful assistance when facing important civil legal issues, such as child custody, debt collection, eviction, and foreclosure. These and many related problems have numerous causes, but the cumulative effect is a legal system that is among the most costly and inaccessible in the world. (from: The Public’s Unmet Need for Legal Services, Andrew M. Perlman, Daedalus, Fall 2021)

Courts are understaffed, hearings are short, and Judges make snap decisions that can be detrimental or catastrophic to pro-se, or self representing individuals simply because those individuals could not translate their arguments into the right legal language or format their filings correctly. Most of those individuals don't even know they can just motion for a set aside to get a Judge's decision vacated and redone. Justine Falcon, Legal AI, can help with these simple tasks, to help people author effective filings and do them well for a few hundred dollars per filing, instead of thousands for an attorney. Again, the scale matters here, as we are talking about millions of people that are unrepresented.


Any entrepreneur that sets out to make a Legal AI, or Electronic Legal Aid without personally fighting in litigation for 3 years like I did would not know what a headwind they face in developing it and getting it adopted, let alone making it effective. Lawyers have huge egos, don’t like technology, aren’t really all that bright, and make for stubborn and fussy adopters. Judges are even more technology phobic. And the sad fact is that in the end, even the most brilliant legal arguments and filings can fall short when a judge is biased, didn’t read the filings, or worse, was bought, and just makes a knee-jerk decision.

The best long-term solution is to bypass the lawyers, judges, and courts completely, and create an AI arbitration service in which the two parties going through a divorce or civil dispute are led step by step through an arbitration, with the AI explaining the relevant laws at each step and getting agreement on each area: assets, custody, claims, and other disputes. At the end, the AI drafts a settlement, has the clients mutually agree and sign it, and files it with the court. The need for the present expensive and corrupt system will then be reduced over time, as this new, better, and more balanced system takes business away from it, until one day the last human judge can make the last filing by hand and turn out the lights on the last, empty human courtroom.
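
To make the flow concrete, here is a minimal, hypothetical sketch of that arbitration loop in Python. The stage names and callback structure are illustrative assumptions, not a real implementation; an actual service would wrap each stage in a conversational AI session.

```python
# Hypothetical sketch of the step-by-step arbitration flow described
# above. Stage names and callbacks are illustrative assumptions; a real
# service would run a conversational AI dialogue at each stage.
STAGES = ["assets", "custody", "claims", "other_disputes"]

def run_arbitration(explain_law, get_agreement, draft, file_with_court):
    settlement = {}
    for stage in STAGES:
        explain_law(stage)                        # AI explains the relevant law
        settlement[stage] = get_agreement(stage)  # repeat until both parties agree
    document = draft(settlement)                  # AI drafts the settlement
    file_with_court(document)                     # filed once both parties sign
    return document

# Toy usage with stub callbacks:
doc = run_arbitration(
    explain_law=lambda s: print(f"Explaining the law on {s}..."),
    get_agreement=lambda s: f"agreed terms on {s}",
    draft=lambda terms: {"settlement": terms},
    file_with_court=lambda d: print("Filed with the court:", d),
)
```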

I call this the Netflix-vs-Blockbuster legal business model: cheaper, more convenient, no waiting or delays, take all the time you need, and no excessive penalties. Scale this US-wide and even globally, and you get a very different world, with justice and fairness for all.

ORBAI Tech Breakdown

9 months ago

Here is a simple breakdown of how our AGI tech works (a code sketch follows the list):

1) Real-world inputs are autoencoded into compact engrams, representing all inputs combined and cut into short intervals of time

2) Each engram is distilled into features, which are more compact and can be used numerically in operations

3) We train a predictor on how the features change in time during a real-life narrative, creating a model of the world it perceives

4) We use this predictor to create fictional narratives, features, and engrams, essentially dreaming using the predictor, and recording the best dreams into memory

5) This fills in the world model better than if we only used the real narratives and engrams (we used this in Justine - see below)

6) To solve a problem, we make fictional narratives through the model until we find an optimal solution
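
Below is a minimal, illustrative sketch of steps (1)-(6) in Python/PyTorch. The module shapes, the GRU predictor, and the random-shooting solver are assumptions made for illustration only; the actual system evolves SNN circuitry for each component (see NeuroCAD, below) rather than using standard deep-learning modules.

```python
# Illustrative sketch of steps (1)-(6) using standard PyTorch modules.
# Dimensions, architectures, and the random-shooting solver are
# stand-ins for illustration, not ORBAI's actual evolved SNN circuits.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):                  # (1) inputs -> compact engrams
    def __init__(self, input_dim=1024, engram_dim=128):
        super().__init__()
        self.enc = nn.Linear(input_dim, engram_dim)
        self.dec = nn.Linear(engram_dim, input_dim)
    def forward(self, x):
        engram = torch.tanh(self.enc(x))
        return engram, self.dec(engram)        # decoder output trains the encoder

class Distiller(nn.Module):                    # (2) engram -> numeric features
    def __init__(self, engram_dim=128, feat_dim=32):
        super().__init__()
        self.proj = nn.Linear(engram_dim, feat_dim)
    def forward(self, engram):
        return torch.tanh(self.proj(engram))

class Predictor(nn.Module):                    # (3) model how features change in time
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, feat_dim)
    def forward(self, feats):                  # feats: (batch, time, feat_dim)
        h, _ = self.rnn(feats)
        return self.head(h)                    # predicted next-step features

def dream(predictor, seed, steps=16, noise=0.05):    # (4)-(5) fictional narratives
    feats = [seed]                             # seed: (batch, feat_dim)
    for _ in range(steps):
        nxt = predictor(torch.stack(feats, dim=1))[:, -1]
        feats.append(nxt + noise * torch.randn_like(nxt))  # perturb to explore
    return torch.stack(feats, dim=1)           # a dreamed narrative of features

def solve(predictor, seed, objective, n_dreams=64):  # (6) search the dreams
    best, best_score = None, float("-inf")
    for _ in range(n_dreams):
        narrative = dream(predictor, seed)
        score = objective(narrative)           # task-specific scoring function
        if score > best_score:
            best, best_score = narrative, score
    return best                                # the best narrative found

# Toy usage: encode a random input, distill it, and search for a
# narrative that keeps features small (an arbitrary stand-in objective).
ae, dist, pred = Autoencoder(), Distiller(), Predictor()
engram, _ = ae(torch.randn(1, 1024))
best = solve(pred, dist(engram), objective=lambda n: -n.pow(2).mean().item())
```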


This is a very general problem-solving approach that can be used on any form of data or input, and on a wide variety of simultaneous inputs, to create a general model of very complex real-world systems. It is also extremely scalable: by increasing memory and processing power, we can increase the number of inputs, generate more real-world narratives, evolve more complex predictor systems, and run orders of magnitude more solution narratives with more efficient and powerful solvers. We can go from a test AGI on a small cluster to a global AGI in 3-5 years just by this scaling.



For each of these components we use our NeuroCAD toolset to evolve the right SNN circuitry to optimally solve the problem in that step.
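
For a rough idea of what "evolving the right circuitry" means, here is a minimal, hypothetical evolutionary-search loop. The genome encoding, fitness function, and mutation scheme are illustrative stand-ins, not NeuroCAD's actual internals.

```python
# Hypothetical sketch of the kind of evolutionary search a toolset like
# NeuroCAD might run to find circuitry for each step. The bit-string
# genome and toy fitness function are illustrative stand-ins only.
import random

def evolve(fitness, genome_len=64, pop_size=50, generations=100, mut_rate=0.05):
    # Each genome is a bit string standing in for an SNN circuit layout.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [bit ^ (random.random() < mut_rate) for bit in child]  # mutate
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy example: evolve toward a genome of all ones.
best = evolve(fitness=sum)
print(sum(best), "of 64 bits set")
```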

Just the tech for step (1), which is working decently today, is very valuable as a video/audio encoder or classifier, and for robotics, drone, and automotive vision systems. We will start licensing this out in early 2022, and expect a significant income stream from that alone. We will also start building a global base of systems using our technology that our customers can later upgrade to the AGI, so we don't have to build out all the servers.

And we are using a really simplified version of steps (3)-(6) in Justine Falcon, based just on sequences of 5-digit codes that represent each of the actions that can be taken during a legal proceeding at any point in time. The court portals have all the proceedings for the past decades in these encoded "Events and Hearings" documents, which were simple to download and use in training. Justine's utility and application are pretty impressive for such simple tech and data.
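
As a rough illustration of that simplified approach, here is a hypothetical sketch: each docket event is a 5-digit code, and a first-order Markov model fitted over code sequences predicts the likely next event. The code values and data format below are made up for illustration, not the actual court-portal encoding.

```python
# Hypothetical sketch of predicting the next docket event from
# sequences of 5-digit event codes. The codes below are made up.
from collections import Counter, defaultdict

def train_markov(case_histories):
    """case_histories: list of lists of 5-digit event codes, e.g. ['10010', '20031']."""
    transitions = defaultdict(Counter)
    for history in case_histories:
        for prev, nxt in zip(history, history[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, code, top_k=3):
    """Most likely next docket events after `code`, with probabilities."""
    counts = transitions[code]
    total = sum(counts.values()) or 1
    return [(nxt, n / total) for nxt, n in counts.most_common(top_k)]

# Example: after a (made-up) motion code '10010', what usually happens next?
model = train_markov([['10010', '20031', '30007'], ['10010', '20031', '40002']])
print(predict_next(model, '10010'))   # -> [('20031', 1.0)]
```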

Here is the detailed Provisional Patent on AGI that describes all of this, along with the NVIDIA GTC 2021 Presentation on the same. I've also written a Quora Article on how we apply this in Law and Medicine.

