Magic Industries has several long-term projects underway, aimed at the eventual ubiquity of VR/AR/MR and the Spatial Web.
Drop us a line if you’d like to collaborate or find out more.
A bio-cognitive human interaction engine for VR/AR/MR that allows users to manipulate 3D objects in virtual space using only the movements they already use in real life.
The ancient Greek word Telos means the ultimate aim – the intended end point of any process. The purpose of TelOS is to remove all the complicated thinking and procedure around how things are done, so you can just get on with actually doing it.
Utilizing full hand and finger tracking, gaze detection, voice recognition and haptics, TelOS provides users with new, physical ways of interacting with data – all without learning specialised gestures, mastering game controllers or remembering menus and keyboard shortcuts.
- Spread documents out in front of you
- Push a group of images into the background
- Discard an item with the flick of a wrist
- Remember where that reference document is by the pile you left it in on your digital desk
- Throw a photo to the center of the room to share it with others
- Sign that contract by actually writing on it
- Document Management
- Data and Knowledge Work
Built to support all major VR/AR/MR headsets, TelOS can be dropped into your application to intuitively manage all interaction for the next generation of Enterprise VR/AR/MR applications.
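To make the drop-in idea concrete, here is a minimal sketch of what wiring an application to such an interaction engine could look like. TelOS's real API is not public, so every name below (TelosEngine, GestureEvent, the "flick" gesture) is an illustrative assumption, not the actual SDK.

```typescript
// Hypothetical sketch only: TelOS's real API is not published, so all
// names here (TelosEngine, GestureEvent, GestureKind) are assumptions.

type GestureKind = "grab" | "push" | "flick" | "write";

interface GestureEvent {
  kind: GestureKind;
  target: string;   // id of the 3D object the hand/gaze is on
  velocity: number; // hand speed, metres per second
}

// The engine maps natural movements to high-level gestures, so app code
// subscribes to gestures instead of raw controller or tracking input.
class TelosEngine {
  private handlers = new Map<GestureKind, (e: GestureEvent) => void>();

  on(kind: GestureKind, handler: (e: GestureEvent) => void): void {
    this.handlers.set(kind, handler);
  }

  // In a real headset this would be driven by hand and gaze tracking;
  // here we feed events in manually for illustration.
  dispatch(event: GestureEvent): void {
    this.handlers.get(event.kind)?.(event);
  }
}

// Example: "discard an item with the flick of a wrist".
const engine = new TelosEngine();
const discarded: string[] = [];
engine.on("flick", (e) => {
  if (e.velocity > 1.5) discarded.push(e.target); // fast flick = discard
});
engine.dispatch({ kind: "flick", target: "doc-42", velocity: 2.0 });
```

The point of the sketch is the shape of the integration: the host application registers a handler per natural gesture and never sees raw input.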
A collaborative enterprise working space for VR/AR/MR.
Imagine if you had a special purpose built office, conference room or workshop for each project that you work on, each client you service, each patient that you see. In that room could be whiteboards and charts, documents and images, 3D models and plans. You could invite colleagues and clients to join the BrainSpace to communicate, create and collaborate, and they can interact even if they don’t have VR/AR/MR hardware.
Need to start a new project? Click “Create New BrainSpace”, just like you would open a new Word document. Set up the room, invite collaborators, get to work. Need to leave? Each BrainSpace remembers where you left things, what you had open and which applications you were running, so you can come back at any time and everything is just the way you left it.
Built on TelOS, BrainSpace facilitates powerful enterprise computing in 3D space. Combining elements of a social space, natural human movement and voice with enterprise functionality and integration with traditional computing platforms, BrainSpace is a comprehensive VR/AR/MR enterprise tool.
- Create a new BrainSpace just like you would open a Google Doc
- Invite your team to collaborate in VR, voice or video
- Install a whiteboard or other collaboration tools from the library
- Put up Agile boards
- Manipulate 3D models (engineering, architectural, design, medical)
- Import existing file types (doc, pdf, xls, etc.)
- Table a document for the whole meeting room to see
- Email a model back to a desktop or mobile
- Create multiple BrainSpaces for different teams and projects
- Select which items in a BrainSpace will appear to each user
- Encrypted Traffic between users
- Encrypted Storage for your entire BrainSpace
- Enterprise persistent data rooms
- Remote conferencing
- Medical data collaboration
- Collaborate on and annotate 3D Models
- Real Time Collaborative Data Analysis
- Remote Demonstrations
BrainSpace operates across multiple platforms, including Desktop, Mobile and Console VR, HoloLens and Meta2, supporting as much functionality as possible on each device.
Even desktop, tablet and mobile users can join a BrainSpace, and while these users won’t be able to reach out and touch your data, they can still watch the proceedings, join the conversation and interact with content.
In the near future of ubiquitous VR/AR/MR, we believe the use of “apps” will die away, replaced by new operating systems, and “browsers” for the Spatial Web. There will be so much content available in worlds both digital and real that you’ll need help just filtering it.
- Persistent across all experiences
- Contact lists and Social Media Integration
- Voice controlled
- Backed by AI and Machine Learning
- Take snapshots in any experience and share them
- Invite a friend into your current experience
- Stream your viewpoint to a friend
- Bookmark your location (in the real or the virtual)
- Real World Object Recognition
- Uses Spatial Web Infrastructure
- Avatar customization and persistence
Developers can drop the ORBY plugin into their projects, and even use ORBY’s hooks to rig in-experience tutorials and other features. Your personal interactions with ORBY make it smarter over time, and anonymized data from all ORBY users feeds a Machine Learning model that improves ORBY’s contextual understanding.
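As a rough illustration of what rigging a tutorial through such hooks might involve, here is a hedged sketch. ORBY's plugin API is not published; the hook name ("onEnterExperience") and the OrbyPlugin class are invented for this example.

```typescript
// Illustrative sketch only — ORBY's plugin API is not public; the class
// and hook names below (OrbyPlugin, onEnterExperience) are assumptions.

type HookFn = (context: { experienceId: string }) => string | void;

class OrbyPlugin {
  private hooks: Record<string, HookFn[]> = {};

  // A developer registers a callback for a named hook point.
  addHook(name: string, fn: HookFn): void {
    (this.hooks[name] ??= []).push(fn);
  }

  // The assistant fires the hook and collects any prompts to speak.
  fire(name: string, context: { experienceId: string }): string[] {
    return (this.hooks[name] ?? [])
      .map((fn) => fn(context))
      .filter((msg): msg is string => typeof msg === "string");
  }
}

// Rig a tutorial prompt that the assistant delivers on entry.
const orby = new OrbyPlugin();
orby.addHook("onEnterExperience", ({ experienceId }) =>
  `Welcome to ${experienceId}. Say "help" for a guided tour.`);

const prompts = orby.fire("onEnterExperience", { experienceId: "gallery" });
```

The design choice worth noting is that the plugin never pushes UI itself: it returns prompts, and the assistant decides how and when to deliver them in context.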
In the real world, ORBY will filter available content, alert you to new content in an area, and assist your interactions with data and digital content.
The Spatial Web will need a lot of infrastructure to work – location mapping, object recognition, various forms of ML and AI, and more. The Reality Forge platform aims to create or integrate this infrastructure, and make it easy for content providers, developers and publishers to access the various features of the Spatial Web.
A network of microservices that can be organized into pipelines, allowing content providers to access automated processing like photogrammetry, decimation, texture compression, image or video adjustment and more.
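A sketch of the pipeline idea, under stated assumptions: Reality Forge's service interfaces are not public, so the stage names below (decimate, compressTextures) and the asset shape are illustrative. Each stage here is a pure function; in production each would be a call to a hosted microservice.

```typescript
// Hedged sketch of chaining microservice-style processing steps into a
// pipeline. Stage names and the Asset shape are assumptions, not a
// documented Reality Forge format.

interface Asset {
  name: string;
  triangles: number;
  textureBytes: number;
}

type Stage = (a: Asset) => Asset;

// Reduce polygon count by a ratio (stand-in for a decimation service).
const decimate = (ratio: number): Stage => (a) => ({
  ...a,
  triangles: Math.floor(a.triangles * ratio),
});

// Shrink texture payload (stand-in for a texture-compression service).
const compressTextures = (ratio: number): Stage => (a) => ({
  ...a,
  textureBytes: Math.floor(a.textureBytes * ratio),
});

// A pipeline is just an ordered list of stages applied left to right.
const runPipeline = (stages: Stage[], input: Asset): Asset =>
  stages.reduce((asset, stage) => stage(asset), input);

// Example: turn a raw photogrammetry scan into a mobile-ready asset.
const scan: Asset = { name: "statue", triangles: 2_000_000, textureBytes: 64_000_000 };
const mobileReady = runPipeline([decimate(0.1), compressTextures(0.25)], scan);
```

Because each stage has the same input and output type, providers can reorder, add or drop stages without changing the rest of the pipeline.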
The central hub of it all – pipelines add your processed content to your private library, or you can upload directly. LookDev and Asset Management tools make it easy to share securely with teams or clients, and all other services reference your Content Library.
Select your content, choose an existing or custom app template, choose the infrastructure features you want and the platforms you want to publish to. The app builder will go to work and notify you of download links for the final distributables. App Builder will also publish to app stores.
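The select-content / choose-template / pick-platforms flow above suggests a declarative build request. The sketch below shows what such a request might look like; the field names and the validation helper are hypothetical, not a documented App Builder schema.

```typescript
// Hypothetical sketch of a declarative App Builder request. Field names
// (template, services, platforms) are assumptions for illustration.

interface BuildRequest {
  contentLibraryIds: string[];                  // assets from your Content Library
  template: "gallery" | "showroom" | "custom";  // existing or custom app template
  services: string[];                           // infrastructure features to bundle
  platforms: string[];                          // targets to publish to
}

// Minimal sanity checks before a request would be submitted to the builder.
function validateBuild(req: BuildRequest): string[] {
  const errors: string[] = [];
  if (req.contentLibraryIds.length === 0) errors.push("no content selected");
  if (req.platforms.length === 0) errors.push("no target platform");
  return errors;
}

const request: BuildRequest = {
  contentLibraryIds: ["asset-001", "asset-002"],
  template: "showroom",
  services: ["multiplayer", "object-recognition"],
  platforms: ["hololens", "android"],
};
const errors = validateBuild(request);
```

Expressing the build as data rather than code is what lets one request fan out to several platforms and app stores.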
Infrastructure Services & Plugins
Skype, WebRTC, Spatial Web positioning, Object Recognition, Computer Vision, Machine Learning Analytics, Multiplayer – all these and more can be dropped into your application during build, with development versions of plugins available for download.
It’s your choice how you use each part: have the content pipelines return automatically processed content back to your own tools, upload your content and use the App Builder, use the Content Library alone for LookDev, or use the entire platform to do everything – with no requirement to remain within the infrastructure. Eventually even our infrastructure services will be replaceable with your own choices, so if you want to use a private ML API or your own custom engines, that’s all good.