Origin PC has come out with a new computer I have my eye on. I'm planning to get back into streaming on my days off from my main job, and a computer that combines a PC, an Xbox, a PS4, and a Nintendo Switch with a capture card looks like it will help.
Origin PC is a computer-building company based in Florida. They have been in operation since 2009, and for their 10th anniversary they wanted to do something extraordinary, so they made the best gaming rig out there. Their main business is building computers customized to what you are looking for, from gaming rigs to workstations and even laptops.
Their options range from small PCs that you can move easily to larger towers that are water-cooled and built for the highest speeds. But the centerpiece here is the Big O, which I believe is their most expensive PC yet. It includes a full Xbox with a 2TB SSD and a custom CRYOGENIC water-cooling loop in Xbox green.
It also packs a full PS4 with a 2TB SSD and blue CRYOGENIC cooling, alongside a fully custom water-cooled PC. A Nintendo Switch dock on the unit lets you play and charge your Switch right on the PC. The PC side has an Nvidia Titan RTX for graphics, 14TB of storage, and 64GB of Corsair Dominator Platinum RGB RAM, with full custom RGB lighting, thorough water cooling, and an Intel Core i9-9900K CPU running it all.
On the back there are ports for every system, so you can plug it into a monitor with multiple inputs and switch inputs on the monitor to change systems. I hope they bring this to the public, but I don't think they will. From what I have found, there is only one, made for Unbox Therapy; I will post the link to the video at the bottom of the article.
Even if the Big O isn't for you (if it does come out, it will be expensive) and you want something a little cheaper, you should try building a computer with them; it is not too hard to do. You can customize every detail, from the color to the GPU. You can also get peripherals from them, like keyboards and monitors, if you wish.
We’ve just published a comprehensive expert guide (PDF) on advanced techniques to create high-quality light fixtures for real-time applications. Read it and find out how you can use light cookies and advanced shaders to create convincing artificial light sources in any project, from games or architectural visualizations to films and more!
Wait a minute, what are light cookies? These are 2D textures or cubemaps used to block parts of a light source in order to control the shape, the intensity, and the color of the emitted lighting. They can also be called “gobos”, “cucoloris” or “flags”, depending on the industry and their use case.
With their help, you can efficiently simulate ray-traced soft shadows, colored transmission, and even refractive caustics! Indeed, rendering these effects fully in real-time would be remarkably expensive in densely-lit environments, even for the most powerful GPUs on the market. This is why baked light cookies are still crucial to produce convincing lighting in real-time scenarios.
Good news: the Built-In Render Pipeline can also take advantage of (grayscale) light cookies! Therefore, you can also reproduce high-quality shadows on platforms incompatible with the High Definition Render Pipeline (HDRP), in real-time.
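As a quick illustration of how a cookie is hooked up in the Built-in Render Pipeline, here is a minimal sketch. The `flashlightCookie` asset name is hypothetical; you would author the grayscale texture following the guide and import it with the Cookie texture type.

```csharp
using UnityEngine;

// Minimal sketch: assigning a grayscale cookie texture to a spot light
// in the Built-in Render Pipeline. The cookie masks the light's shape
// and intensity, similar to a physical gobo.
public class CookieAssigner : MonoBehaviour
{
    public Texture flashlightCookie; // grayscale 2D texture (spot lights); point lights need a cubemap

    void Start()
    {
        Light spot = GetComponent<Light>();
        spot.type = LightType.Spot;
        spot.cookie = flashlightCookie;
    }
}
```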
After pointing out some of the common lighting mistakes still found in CGI nowadays and giving you recommendations on how to prevent them, the expert guide walks you through all the steps required to generate beautiful noise-free cookies with a variety of 2D and 3D programs, such as Photoshop, 3ds Max and Unity itself.
Moreover, I will explain how to set up critical post-processing settings in Unity, such as Exposure and Tone Mapping, so that your interior scenes can be lit in a more physically-correct way, one of HDRP’s mottos.
Then, the guide gives an extensive review of the crucial Light properties in HDRP, such as the physical Intensity units, the Color modes, the Shadows parameters, and the Light Layers used to restrict lighting to specific objects. Later, I present different methods to replicate the lampshade of a chandelier with HDRP’s highly flexible Lit shader.
Finally, I introduce an original workflow to generate appealing caustics to bring the final ultra-realistic touch to your light sources, by adding micro-details to simulate the self-reflections of the light fitting and the structural imperfections found in the reflectors and the lampshades.
Hopefully, thanks to our new expert guide, you will have many tools on hand to create convincing light sources, and raise the visual bar of your Unity Scenes!
Beyond sales and marketing applications, AR has innovative use cases for design-collaboration, training, and maintenance. We are excited to share how Visionaries 777, a design and engineering AR/VR agency, used Unity and Vuforia to deliver a compelling AR auto experience that used physical models and real-world conditions to bring AR overlays to global automotive shows. This post will discuss how they built their experiences and why it was a hit.
Vuforia recently released a new feature called Model Targets with deep neural networks (DNNs). Model Targets, unlike image targets and QR codes, enable developers to place an AR experience on a physical object by using a 3D CAD model. Recognizing objects by shape is a powerful technology for the industrial enterprise market, including automotive companies looking to build more robust AR solutions.
Visionaries 777 put Vuforia Model Targets and Unity to the test with the W Motors Lykan Hypersport. Using Unity and Model Targets generated in Vuforia, they created an application to change the outer colors and details of the vehicle. This application was used at car shows around the world, allowing people to customize the Lykan with their choice of colors and decals.
The ability to constantly and accurately track the car allows seamless visual effects to be created. The app offers a scenario in which the viewer’s imagination comes to life when creating their dream vehicle. The physical car is flawlessly layered with the new digitized car paint, reflecting the physical environment and surrounding light. The app further enables the user to digitally apply vinyl all around the body represented as racing stripes, graphics, and logos.
Get more details by checking out our Unite session to learn how Visionaries 777 used Unity and Vuforia to build their AR experience.
“Vuforia Model Targets is a game changer that many developers have been eagerly waiting for quite some time. Combining the Unity platform with Vuforia Engine has enabled us to take AR beyond a gimmick and create compelling solutions for the enterprise world.”
Frantz Lasorne, Co-Founder of Visionaries 777 Ltd.
Vuforia Engine 8.0, released this January, expands the possibilities for Model Targets. This newest version provides the option to train Model Targets using deep neural networks (DNNs) for instant, automatic object recognition. Once the Model Target of an object or objects has been trained with multiple viewpoints, the information is used to not only identify what object you have in front of you, but what angle you are viewing it from. The application will then provide a guideline that most closely matches with the user’s angle to snap to the AR experience.
In an automotive setting, a user would need to simply start the application, point the device camera at a car, and Vuforia Engine would recognize the vehicle and provide a guide view matching the angle they’re seeing and then snap to the object. From automobile service instructions to luxury car marketing details, this technology offers new, dynamic ways to use AR.
With Model Targets trained using DNNs, you’re now able to use Unity to create more realistic AR experiences. Plus, by building AR auto applications using the Vuforia Engine and Unity, you can build once and deploy across multiple platforms, letting you reach the widest possible audience in less time.
Learn more about our solutions for the automotive and transportation industry here. And learn more about the newest version of Model Targets or Vuforia Engine, at our Vuforia Partner page.
2019.1 marks the start of the newest TECH stream, with lots of new features and functionalities. This includes more control over the editor and improvements to both your potential iteration speed when developing for Android and your workflows in general. Read on to get more details on what’s available to try out in the new beta today!
Incremental Garbage Collection (Experimental)
In Unity 2019.1 we’re introducing the Incremental Garbage Collector as an experimental alternative to the existing garbage collector for projects targeting the Universal Windows Platform. The Incremental Garbage Collector is able to split its work into multiple slices. Instead of having a single long interruption of your program’s execution to allow the GC to do its work, you can have multiple, much shorter interruptions. While this will not make the GC faster overall, it can significantly reduce the problem of GC spikes breaking the smoothness of animations in your project by distributing the workload over multiple frames. To learn more, read our blog post here.
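A hedged sketch of how you might query and tune the incremental collector from script; the feature itself is toggled in Player Settings, and the `GarbageCollector` API names below are as exposed in 2019.1 and may change while the feature is experimental.

```csharp
using UnityEngine;
using UnityEngine.Scripting;

// Sketch: checking whether the incremental collector is active and bounding
// how long each GC slice may run per frame. Enabling the collector itself
// is done via Player Settings ("Use incremental GC"), not from code.
public class GcTuning : MonoBehaviour
{
    void Start()
    {
        if (GarbageCollector.isIncremental)
        {
            // Allow at most ~3 ms of GC work per slice.
            GarbageCollector.incrementalTimeSliceNanoseconds = 3000000;
        }
    }
}
```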
With the Shortcut Manager, we’re introducing an interactive, visual interface and a set of APIs to make it easier for you to manage editor hotkeys, assign them to different contexts and visualize existing bindings. To address the issue of binding conflicts, the interface can also visualize whether multiple commands use the same binding and let you remap accordingly.
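Alongside the visual interface, the Shortcut Manager API lets editor code register commands with default bindings. A minimal sketch, assuming an editor-only assembly; the command ID, key, and logged output here are examples, and users can rebind the shortcut in the Shortcut Manager window without code changes.

```csharp
using UnityEditor;
using UnityEditor.ShortcutManagement;
using UnityEngine;

// Sketch: registering an editor command with a default binding
// (Shift+L in this example) through the Shortcut Manager attribute API.
static class MyShortcuts
{
    [Shortcut("Custom/Log Selection", KeyCode.L, ShortcutModifiers.Shift)]
    static void LogSelection()
    {
        Debug.Log(Selection.activeObject != null
            ? Selection.activeObject.name
            : "Nothing selected");
    }
}
```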
More GPU Lightmapper Functionality (Preview)
2019.1 brings additional functionality and platform support to the GPU Lightmapper (preview). It’s now enabled on macOS and Linux and supports double-sided GI flags on materials as well as shadow casting and receiving on meshes.
Baking now uses the same high-performance GPU as the Editor. You can still change this to a different GPU using the command line. Head over to Documentation for more info.
Use Unity’s SceneVis controls to quickly hide and show objects in the Scene View, without changing the object’s in-game visibility. As a scene becomes more detailed, it often helps to temporarily hide or Isolate specific objects, allowing you to view and edit without obstructions. SceneVis enables this functionality via hierarchy tools and keyboard shortcuts, plus a toolbar toggle to quickly enable or disable the effects.
It’s now possible to manipulate particle data using the C# Job System, with no copying of particle data between script and native code. In addition, we have made some improvements to mesh particles, giving you greater control over which meshes are assigned to which particles.
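For contrast, the pre-2019.1 pattern copies every particle into a managed array and back each frame, which is exactly the overhead the new Job System integration removes. A sketch of that classic approach:

```csharp
using UnityEngine;

// The classic copy-based approach to scripted particle manipulation:
// GetParticles copies native particle data into a managed array, and
// SetParticles copies it back. The 2019.1 Job System path operates on
// the particle data directly, skipping both copies.
public class ParticleFader : MonoBehaviour
{
    ParticleSystem ps;
    ParticleSystem.Particle[] buffer;

    void LateUpdate()
    {
        if (ps == null) ps = GetComponent<ParticleSystem>();
        if (buffer == null || buffer.Length < ps.main.maxParticles)
            buffer = new ParticleSystem.Particle[ps.main.maxParticles];

        int count = ps.GetParticles(buffer);   // copy native -> managed
        for (int i = 0; i < count; i++)
            buffer[i].remainingLifetime -= Time.deltaTime * 0.5f;
        ps.SetParticles(buffer, count);        // copy managed -> native
    }
}
```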
Android SDK and NDK installed with Unity Hub
The Hub now provides the option to install all the required components for Android as part of the “Android Build Support” option, so you’re sure to get the correct dependencies and don’t have to gather and install anything else. If you’re an advanced Android user, you can still install and configure components manually and use Android Studio. Also, note that starting with 2018.3, “Android Build Support” comes with its own Java Runtime based on OpenJDK.
Android Logcat integration (Package)
Android Logcat Package is a utility for displaying log messages coming from Android devices in the Unity Editor, making it easier to debug by controlling and filtering messages right in Unity.
Faster iteration with Scripts Only patching on Android
To speed up iteration during development, the Unity editor offers the Scripts Only Build option, which skips many steps in the build process, recompiles only the scripts, then builds the final package and deploys it when you select “Build And Run”. We have extended this feature: it can now patch the app package (APK, Android only) on target devices instead of rebuilding and redeploying it, so when you iterate on your C# code, only the recompiled libraries are sent to the device. Note that a complete build of the project must be available before Unity can execute a “Scripts Only Build”.
Editor Console Improvements
The editor console has been updated with clickable stack trace links that will take you to the source code line for any function calls listed in the stack, and textual search to filter down your console entries.
Timeline Signals are an easy way for Timeline to interact with objects in the scene. Using a signal emitter and a signal asset, you can trigger a signal receiver in a game object that will define a set of pre-configured reactions to your Timeline.
Signal Emitters can be created on the new Marker area, on any type of track, and on new Signal tracks. They are fully customizable; go wild and create your own!
Then use Signal Receiver components to trigger pre-defined contextual reactions on your game objects.
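Because a Signal Receiver's reactions are UnityEvents configured in the Inspector, a reaction is just a public method on any component in the scene. A minimal sketch (the class, method, and logged message are hypothetical examples you would wire up in the receiver's reaction list):

```csharp
using UnityEngine;

// Sketch: a reaction target for a Signal Receiver. No Timeline-specific
// code is needed; the receiver invokes this public method via a UnityEvent
// when its matching signal asset is emitted.
public class DoorReactions : MonoBehaviour
{
    public void OpenDoor()
    {
        // Hypothetical reaction: play an animation, enable an object, etc.
        Debug.Log("Signal received: opening the door");
    }
}
```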
Other changes and improvements
The 2019.1 beta also includes support for Video H.265 transcode, Nvidia’s OptiX AI Denoiser, OpenGL ES 3.2 and multiple importance sampling of environments with the CPU lightmapper, among several other features and improvements. You’ll find a complete list of all the new features, improvements and bug fixes included in the release in our release notes section, and a collection of preliminary documentation for some of these features in this forum thread. Please note that the minimum required versions of macOS and Ubuntu that support Unity 2019.1 and projects made with it were raised to macOS 10.12 and Ubuntu 16.04.
If you’re looking to upgrade an existing project to 2019.1, please have a look at our Upgrade Guide.
How to get early access to the new features
You can get access to everything mentioned above right now simply by downloading our open beta. Not only will you get access to all the new features, you’ll also help us find bugs and release the highest quality software.
If you are not already a beta tester, perhaps you’d like to consider becoming one. You’ll get early access to the latest new features, and you can test if your project is compatible with the new beta. The beta release is available for free to all Unity users, including Personal Edition users.
As a starting point, have a look at this guide to being an effective beta tester to get an overview. We also encourage you to sign up for the optional beta tester email list below. Signing up will enable us to send you notifications when new versions are available, as well as tips on how to be an effective beta tester and news on the beta.
We’re looking forward to talking to you about the beta and reading your feedback on our 2019.1 beta forum as well.
Thanks to our collaboration with PiXYZ, you can now host your AEC projects in Unity. This post explains how to use the PiXYZ Plugin for Unity and what you can achieve in real-time.
For architecture, engineering, and construction (AEC) professionals, Unity is the ideal platform to host your creative projects. Whether you are creating photorealistic walkthroughs that let customers envision spaces, getting everyone on the same page regardless of their background or skill level, or catching mistakes that cost time and money, you can make the right decisions before a yard of concrete is poured. Now we’re taking it one step further to make your lives even easier in Unity 2018.3.
As you may know, we’ve been closely collaborating with PiXYZ. For example, with the PiXYZ Plugin and PiXYZ Studio for Unity, you can now import IFC BIM files into Unity. So in this post, we’ll demonstrate how to do that, as well as walk you through the other new features and visualization enhancements. Also, we’ll show you how to easily make adjustments in real-time.
Note that the 2018.3 versions of these PiXYZ products are available today from the Unity Store.
The PiXYZ Plugin for Unity is part of a software suite created by PiXYZ that is designed to edit, optimize, and import CAD files and Building Information Modeling (BIM) data straight into Unity.
Until recently, the PiXYZ Plugin was mainly used to integrate CAD models into Unity. With the Preview release of PiXYZ 2018.2 and full rollout in PiXYZ 2018.3, you can now use the Industry Foundation Class (IFC) format to import your BIM data. This bridge allows you to harness the power of BIM metadata, and optimize and create real-time renderings of CAD models using numerous Unity features.
To get more details about PiXYZ and to see a live demonstration, check out their presentation from last year’s Unite Los Angeles, where they provided a comprehensive showcase of the product. In addition to their session, you can watch a shorter walkthrough of a demo Project featuring Unity’s own Mike Geig, Head of Evangelism Content. In his presentation, Mike highlights the steps involved to import the data, perform spot corrections by merging geometries, use rules, adjust materials and lighting with Prefabs, and do a bit of post-processing.
This section walks you through how to import a CAD file into Unity with the PiXYZ Plugin using the .ifc file format, demonstrating how CAD data can be imported into Unity from your favorite design application. While this focuses on exporting from Autodesk Revit, note that with the IFC format, BIM data can be imported from any program that supports IFC, such as Civil 3D, Tekla, and ArchiCAD.
When you have access to PiXYZ, you can bring a CAD model into Unity by choosing PiXYZ > Import Model as seen in the image below. The Import CAD window lets you choose parameters that determine the overall output of the imported model.
Next, we’re going to walk through a few import parameters to consider when importing a CAD model.
This determines the size of the imported model in Unity. For this example, meters was chosen as the unit of measurement in Autodesk Revit.
Autodesk products use “Z for Up,” whereas Unity uses “Y for Up.” Stating that Z is Up ensures that your imported model will retain the correct orientation.
If active, metadata integrated into the model will be imported into Unity. More information on how this is stored can be found in the section on BIM Metadata below.
Note: Without this parameter set to Active, Unity will not import the crucial BIM data that enables the use of the Rule Engine feature.
Create Ch.0 UVs
Create a new primary UV set: Channel 0. Ensure that this parameter is enabled so textures map correctly in Unity.
Generate Lightmap UVs
This is only available once you have enabled the generation of Ch.0.
If left as Default (None (Shader)), the PiXYZ Plugin will use the current render pipeline shader to create the materials imported into Unity. Therefore, if you want to use a specific shader for your imports, simply drag it onto this property.
This generates a Prefab in a designated folder within the Project. Additionally, the new improved Prefab workflow, only available in Unity 2018.3 or later, will be accessible within this Prefab.
By default, PiXYZ will produce a folder called “3D Models” and store the Prefab inside. The generated folder containing the Prefabs can be changed in Project Preferences (Edit > Preferences > PiXYZ).
With Live Sync active, multiple parameters are accessible within the Preferences of the current Project. Under “Live Sync Beta,” three Update Mode preferences are available:
Ask on Change: When a File is overwritten or edited, Unity prompts you to confirm any edits before updating.
Auto Update: Automatically updates a model without a prompt when a model is altered.
Manual: Requires you to resynchronize manually.
It’s generally good practice to create a Prefab of the model, as this not only retains a copy of the model in your Project folders but enables the PiXYZ Live Sync Feature.
In the next section, we’ll do a deeper dive into Live Sync.
Once you’ve set all the relevant parameters, choose Import, and the file will be added to your current scene as a Prefab. For additional information regarding the Import CAD window, see this documentation from PiXYZ.
Deep dive into Live Sync
Live Sync, when activated, is a valuable feature for keeping small to medium-sized models in sync with your Unity scene. It updates models within a Unity project with any edits made to the imported file, a quick and efficient solution that saves you a lot of time. Live Sync is also compatible with any file format PiXYZ can import (for example, .IFC, .OBJ). Let’s walk through it.
Within the designated folder, you will notice some changes. At the top left of the Model/Prefab icon, a status symbol indicates the current state of the Prefab, as seen below.
Green Tick: Prefab is in sync with File and the Model is up to date.
Yellow Arrows: Prefab is not in sync with File, and the Model needs to be resynchronized.
Red X: Model does not exist.
Remember, the Live Sync Settings can be found by clicking the button on the top right of the Model/Prefab icon.
A pop-up window appears with information such as the file path of the Model on the machine. It also contains two fields where you can assign assets created for use within the Rule Engine.
See the Rule Engine section below for more information about the use of both Preset and Rules Interaction fields.
Once you’ve imported your model into the project – in this case, an IFC containing BIM data – you can find the Prefab in both the Project folder and the Hierarchy, as mentioned. By expanding the Prefab within the Hierarchy window (hold ALT and click the drop-down arrow to the right of the Prefab to expand all children), you can see that all of the meshes of the imported model are children of empty GameObjects.
The meshes themselves don’t contain any BIM data; they simply have a Mesh Renderer component and the material imported from Revit. It’s the empty parent GameObjects that contain the BIM data.
When you select an empty parent GameObject, the Inspector displays a table containing any BIM data exported from Autodesk Revit or other BIM software packages. The BIM data displayed corresponds to BIM data within Autodesk Revit. For example, “/Reference” is the name of an object’s Type Properties in Autodesk Revit.
You can then use this data within a Rule for imported objects within a Scene.
Introduced in Unity 2018.3, the PiXYZ Toolbox provides an efficient process for modifying and optimizing geometry. A tool within the Toolbox can be applied to any currently selected object by simply right-clicking within the Scene window. With the Toolbox, you can easily manipulate objects without needing a RuleEngine Rule Set.
Note: Once the Toolbox applies an action, it cannot be undone.
Additionally, you can use the Toolbox on any GameObject within the Scene, including cameras and light emitters. To access the Manual Toolbox, right-click the selected objects either within the Scene window or within the Hierarchy.
Unlike the Toolbox’s manual process, the Rule Engine automates applying a set of Rules that determine the output of a model, either at import time or in-Editor, according to user-defined preferences. When a Rule Set is run, its Rules are applied sequentially.
You can create a Rule in Unity, define what it does, and apply it to a selected model, in this case an IFC containing BIM data from Revit. For example, a Rule could replace the material of the model, replace the model itself, or even add to what is already there. Most importantly, this is all done with one click. Let’s walk through the steps below.
Using the Rule Engine
To create a new Rule Engine Set, either choose Create > PiXYZ > RuleEngine Rule Set
Or choose PiXYZ > Rule Engine > Create New Rule Set
Once a New Rule Set has been generated within a designated folder, clicking it will enable new Rules to be constructed within the Inspector. Clicking the Plus (+) icon produces the foundation for a Rule.
At the start of every Rule, you need to specify what is being included in the Rule. Click the Plus icon and select Get. There are three options:
All Game Objects: The Rule includes all GameObjects in the Scene (light emitters, cameras, etc).
Imported Models: The Rule incorporates all imported models in the Scene.
Latest Imported Model: The Rule includes the most recently imported model in the Scene.
The Rule can now be further expanded, since you have full access to the Toolbox actions within the Rule. This allows a Unity Project to become the platform for real-time commercial architectural visualizations. Treated as a foundation project, Rules can be reused across varying models.
By using preset naming conventions established in Autodesk Revit or other BIM software packages, which sync up with Rules established within Unity, visualizations can be set up simply by running a Rule once.
Additionally, you can integrate a custom script into the PiXYZ Toolbox and use it within a Rule.
In addition to generating a Rule Set, implementing an Import Settings Asset will provide a vast array of import preferences, all retained within an Asset. The Asset contains the same parameters found within the Import CAD window, excluding Post-Processing and the Preset selection box.
Both the Import Settings and Rule Assets can be used in two ways:
Either as the principal preset and Rule for the importing options when applied as the preset within the Import CAD window.
Or as the Preset and Rule for Resynchronization of the imported model when using Live Sync.
Now that you have a firm grasp of the Rule Engine, let’s create a Rule that replaces imported materials with Unity Materials. Here are two ways of doing this.
Replace Material Rule #1
To create a new RuleEngine Rule Set, choose Create > PiXYZ > RuleEngine Rule Set.
Next, add a Start node to determine what is being influenced by the Rule. Click the Plus icon and select Get, then:
All Game Objects
Latest Imported Model
Within Autodesk Revit, naming conventions were established within the object’s Type Properties. The object’s Type name has been replaced with a custom name that exports into Unity as “/Reference” within the IFC.
Then, in order to access the relevant BIM data within the project, we need to filter through the data. Click the Plus icon, then select Filter > On Property.
Under Property Name, input the name of the data type you want to filter through. As mentioned in the “Handling the BIM metadata” section, the Type Properties, Type name has been replaced with a custom name in Autodesk Revit, which imports into Unity as “/Reference”.
Under Property Value, input the custom name or a word from the custom name that is unique to that type of object(s) in the project. For example, replacing the imported material for an interior window with a Unity material.
Due to the naming conventions established, “Interior_Window” is only present within the names of transparent windows within the model. This enables “Interior_Window” to be used as a Property Value. The name used for this example is “Btn_02_Interior_Window_01”.
Now the Rule will filter through all the BIM data in the project, under the Property Name “Reference,” looking for any names containing “Interior_Window”.
Currently, this Rule only locates the empty GameObjects within the Hierarchy, as they store the BIM data, so you then need to click the Plus icon and select Get > Children. Make sure to enable “Recursive,” as this ensures all child meshes are included, no matter the quantity.
Finally, click the Plus icon and select Set > Material.
Set with a Unity Material by either clicking the Selection box button on the far right or by dragging a new Material into the box.
At the bottom of the Inspector, click Run to apply all Rules within the RuleEngine Rule Set.
Note: Rules are applied sequentially, from top to bottom.
This Rule was structured so it can be used for processes other than just replacing material. The Rule now selects imported objects with defined BIM data types, and further Rules can be applied to extend its functions.
Replace Material Rule #2
Add a Start node to determine what is being influenced by the Rule.
Next, click the Plus icon, then select Modify > Switch Materials.
Extend Switch Materials by clicking the 3-dot icon in the far-right corner. A drop-down menu with multiple options for the Rule appears:
Add Selected Materials: Adds any materials that are currently selected in the Scene.
Fill With All Materials In Scene: Takes all materials currently in the Scene.
By choosing “Fill With All Materials In Scene,” the Rule Engine takes all the materials and deposits them in a table within the Rule. This results in an efficient means of replacing materials simply by placing Unity materials within the blank Selection boxes.
Finally, simply Run the resulting Rule to replace Materials within the Scene.
That was a detailed walkthrough of the updated PiXYZ features in 2018.3: importing BIM data, setting the necessary parameters, and covering Live Sync, the Toolbox, the Rule Engine, and finally Replace Material Rules. We hope you enjoyed learning how to use the PiXYZ Plugin to import Revit data into Unity. This is the result of a very close collaboration between Unity and PiXYZ to significantly improve the import process and save you a lot of time.
I’d like to remind you to watch the PiXYZ presentation from last year’s Unite Los Angeles, as it is a valuable walkthrough of the capabilities. Also, check out Mike Geig’s walkthrough of a Project that highlights how to import, perform spot corrections, and do a bit of post-processing.
We’re always listening to customers, and encourage you to post your ideas and requests in the comment section below. Or, if you want to talk with us directly or get access to a trial version, please contact us. If you want to get started now, please visit our store to purchase PiXYZ.
Finally, check out my Twitter @KieranColenutt for ongoing Unity tips, tricks and future AEC content.
In October, we released our most comprehensive monetization solution to date, Monetization SDK 3.0. It’s the first SDK we’ve released that goes beyond ads: Unity can tie together ads and in-app purchases into a single auction, serving the most relevant format to each player based on their lifetime value (LTV). Also included were two additional ad formats: banners and AR. Now let’s dive into why these new formats are a solid addition to your monetization strategy.
Why banners? Why now?
You might be thinking: Why is Unity supporting banners now? What happened to all the talk about why rewarded video is such a powerful ad format? Don’t get us wrong – full-screen video ads are still players’ most-preferred format and they continue to drive great results. However, the truth is that banners remain a large source of revenue across the industry, and it’s the format that developers most often ask us to support. We recognize that banners are not a new or sexy ad format, but we understand the importance they have within a game’s monetization stack. Supporting banners is another way Unity continues its mission to enable developers’ success.
How do Unity banners work?
Stupid Zombies banner
Banner ads are smaller-sized graphical ads, which typically include static images and text to convey a marketing message. Banners appear at the bottom of the screen, and you can determine when they appear during gameplay. They are a completely separate placement from other ad units and, once you implement them, you can view associated banner revenue in the Operate dashboard. Unity currently supports 320x50px static banners. Since this is a new ad format for us, we do expect fill to ramp up over time.
How do you set up Unity banners?
Begin by reading these instructions on how to set up banner placements, then update by downloading and importing the appropriate Monetization Asset package or SDK for your environment: Unity (C#), iOS (Objective-C), or Android (Java).
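In Unity, the setup can be sketched as below. This is a minimal example, not the full integration: the game ID and placement ID are placeholders for the values configured in your dashboard, and exact calls may vary slightly between SDK minor versions.

```csharp
using UnityEngine;
using UnityEngine.Advertisements;

// Sketch: initializing the SDK and showing a bottom-anchored banner.
// Replace the placeholder IDs with your own, and disable test mode
// before shipping.
public class BannerController : MonoBehaviour
{
    [SerializeField] string gameId = "1234567";      // placeholder
    [SerializeField] string placementId = "banner";  // placeholder

    void Start()
    {
        Advertisement.Initialize(gameId, true);      // true = test mode
        Advertisement.Banner.SetPosition(BannerPosition.BOTTOM_CENTER);
        Advertisement.Banner.Show(placementId);
    }

    public void HideBanner()
    {
        Advertisement.Banner.Hide();
    }
}
```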
AR Ads: Why do they matter?
Augmented reality (AR) has captured a lot of interest from consumers and developers over the past few years. AR had a breakout hit with Pokemon Go and started to reach mainstream audiences thanks to Apple’s ARKit and Google’s ARCore. At the end of 2018, AR Insider estimated that there are 900-million AR-compatible smartphone devices and 129-million monthly active users.
The ability to integrate AR ads into a game presents an exciting opportunity for developers, advertisers, and players, as the format allows brands and advertisers to connect with consumers in new, immersive ways. Unity’s AR Ads are completely opt-in, and the ads can run in rewarded or non-rewarded placements. Once the user opts in, the experience becomes a true two-way interaction and is highly engaging.
Initial AR ad campaigns have shown promising results. We partnered with an entertainment brand and Nielsen Mobile Brand Effect (MBE) to conduct a study around an AR campaign that aimed to drive awareness for their new TV show. The AR campaign allowed users to immerse themselves in the characters’ world and drove a 70% lift in aided awareness of the new show, more than 7 times the benchmark of 9.7%.
Game publishers looking to grow their apps have also expressed interest in running AR ads in addition to video and playable ads.
How do they work?
At the time of an ad request during gameplay, the AR ad asks the player for permission to use their camera in order to experience the ad. Once the player opts in, they can begin to engage with the interactive content. In this example, the player is able to see the product come to life, change the colors and style, and then virtually try out the product. Their experience concludes with an end card and a call to action (CTA) to learn more about the product before gameplay resumes.
AR ads can be rewarded or non-rewarded and can be implemented wherever a video or interstitial would be placed – it’s up to you as the developer to decide what makes the most sense within the experience. The opt-in permission flow for the ad unit is handled by the SDK. Users can opt in to the experience, learn more about AR ads, or opt out.
Who do AR ads make the most sense for?
AR ads have a natural fit within AR games, as well as in games where the player has already enabled the camera. In these cases, the player is comfortable using their camera and has already given initial permission. The ad unit itself will always ask for explicit permission to use the camera, but the user is probably already in a receptive mindset.
If the user has not already granted permission for the camera, the ad unit asks for permission at the time of the AR ad impression. We encourage developers to consider the user flow and optimal experience for the player. AR ads are meant to be an additive experience for the game and the user and are not designed to take away from gameplay.
How to get started?
Begin by reading these instructions on how to set up AR placements, then update by downloading and importing the appropriate Monetization Asset package or SDK for your environment: Unity (C#), iOS (Objective-C) or Android (Java).
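As a rough sketch of the C# side, showing a non-banner placement (AR ads use regular rewarded or non-rewarded placements) with Monetization SDK 3.0 looks something like the following. The game ID and placement ID are placeholders, and the exact callback signature may differ slightly between SDK versions:

```csharp
using UnityEngine;
using UnityEngine.Monetization;

// Sketch of showing an AR (or any rewarded/non-rewarded) placement with
// Monetization SDK 3.0. "1234567" and "arPlacement" are placeholders.
public class ARAdExample : MonoBehaviour
{
    const string gameId = "1234567";          // placeholder
    const string placementId = "arPlacement"; // placeholder

    void Start()
    {
        // Test mode while developing; set to false for production.
        Monetization.Initialize(gameId, true);
    }

    // Call this where a video or interstitial would normally go.
    public void ShowArAd()
    {
        if (!Monetization.IsReady(placementId))
            return;

        var ad = Monetization.GetPlacementContent(placementId)
                     as ShowAdPlacementContent;
        // The SDK handles the camera permission opt-in flow itself.
        ad?.Show(result => Debug.Log("Ad finished: " + result));
    }
}
```

Note that the camera opt-in prompt is presented by the ad unit, so the calling code does not need to request camera permission itself.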
Light & Shadows, based in Paris, has used HDRP to produce stunningly realistic real-time images and videos. This blog post from Light & Shadows walks through the processes they’ve used to achieve these excellent results.
History of Light and Shadows
Unity has worked closely with Light & Shadows on projects to demonstrate lifelike visual quality in real time. This blog post has been written by Light & Shadows to provide technical insight into how they used Unity’s new HD Render Pipeline to produce an amazingly realistic video of the Lexus LC 500 in real time. Light & Shadows was founded in 2009 in response to growing demand by major industrial companies for compelling visual content. From its founding, the company has thrived by continuously adapting and innovating the delivery of new capabilities to its customers, including the generation of high-quality rendering together with cost reduction and productivity improvement. The rest of this blog post is from Light & Shadows.
Real-time rendering: A game changer for the automotive industry
Technology is transforming the way we experience, sell and buy cars. Light and Shadows has deep experience with automotive visualization and offline rendering, as evidenced by the PSA car configurator and projects with Dassault Aviation. Although we primarily used a different real-time engine until recently, we have now partnered with Unity to deliver real-time rendering results using HDRP that achieve new levels of visual quality and performance. To prove these new capabilities, we recently created a video demonstrating real-time rendering of a Lexus LC500.
The project from start to finish
CAD data preparation tools and process
For the LC500 project, Lexus provided us with access to boundary representation (B-rep) CAD data for all visible surfaces and certain internal geometry. This was helpful, as we didn’t need to do any hard surface modeling. However, the model is very complex and so organizing and preparing the data was a challenge. To meet this challenge, we selected PiXYZ software for its advanced tessellation and scriptable data preparation features.
As is typical of most complex products, the model of the Lexus is organized into an extensive hierarchy of objects, in this case representing thousands of components of the car. We decided to separate the outside and inside of the car, which gave us more flexibility and allowed us to work on the two parts of the model in parallel. We were also able to segregate design content including the powertrain, chassis, and vehicle structural components.
The source data is not organized as a single vehicle per file, but instead as a collection of parts covering all the geometric options of the same vehicle. This provided us the opportunity to organize the data in a logical way to mirror the variants (options) available to a Lexus customer. We used an XML file to logically connect each part in the source data with the relevant option logic. Using this XML together with a custom script, we were able to isolate the different variants of the vehicle into a form where a visual representation of each option combination can be readily assembled.
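To make the idea concrete, here is a hypothetical sketch (plain C#, all names invented — the actual Light & Shadows script and XML schema are not public) of how an XML variant map connecting part names to option codes could be loaded and queried:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Hypothetical variant map: each <part> element links a CAD part name
// to the option code it belongs to, e.g.
// <variants><part name="SpoilerA" option="sport"/></variants>
class VariantMap
{
    readonly Dictionary<string, List<string>> partsByOption =
        new Dictionary<string, List<string>>();

    public VariantMap(string xml)
    {
        foreach (var part in XDocument.Parse(xml).Descendants("part"))
        {
            var option = (string)part.Attribute("option");
            if (!partsByOption.TryGetValue(option, out var list))
                partsByOption[option] = list = new List<string>();
            list.Add((string)part.Attribute("name"));
        }
    }

    // Returns the part names that should be visible for a chosen
    // combination of options.
    public IEnumerable<string> PartsFor(params string[] options) =>
        options.SelectMany(o =>
            partsByOption.TryGetValue(o, out var l)
                ? l
                : Enumerable.Empty<string>());
}
```

A script along these lines can then toggle the corresponding objects to assemble a visual representation of each option combination.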
Once the different parts were isolated, the next step was tessellation. This process involves transforming CAD data (B-rep) into a tessellated form (triangles) which can be used in applications such as 3ds Max or Unity. By using PiXYZ software, we were able to produce relatively lightweight tessellated models and still produce excellent visual quality.
Options to achieve optimal lighting and baking
To achieve the very high standard for visual quality that we set for ourselves, we used lightmaps to enhance lighting. As part of this workflow, we needed to unwrap every single part of the car without any overlaps. We used automatic unwrapping tools together with interactive (manual) unwrapping where needed to optimize seam placement.
UVs wrapped across visible surfaces
We evaluated two options to compute the lightmaps – directly in Unity with its built-in lightmapper, and in 3ds Max using a third party renderer such as Octane or V-Ray. We decided to evaluate both methods so we could compare the quality level resulting from each, and to test the workflow to integrate external lightmaps with Unity’s built-in lightmapper. The interior lightmaps of the vehicle were calculated directly in Unity, and with the right settings, the results were very convincing. The direct lighting is provided by real-time lights, while the indirect lighting is baked. With this technique, we are able to provide realistic visuals even when animating interactive parts of the scene such as turning the steering wheel and opening the doors. Using the built-in lightmapper in Unity was very straightforward and yielded excellent results.
Interior with and without lightmaps
We chose to experiment with the Octane lightmapper to interactively tailor specular occlusion for each exterior object in the model. We edited the standard shader in HDRP to properly incorporate a separated UV channel. The outcome was satisfying and enabled us to generate very realistic visuals, especially in the gaps between the body panels. Using these two approaches allowed us to evaluate tradeoffs between the flexibility of an external lightmapper and the ease of use of the integrated lightmapper. Ultimately we concluded that we could achieve the desired results with either approach.
Exterior with and without lightmaps
Tailored materials to achieve photorealism
With our years of experience in the world of automotive offline rendering, we have accumulated a large collection of high-res textures in the form of diffuse maps, height maps, specular maps, normal maps and more. For this Lexus project we wanted to use our favorite maps to get the best results, but to optimize performance Unity needs metallic, smoothness and AO maps combined into the RGB channels of a single map. This approach is optimal for performance but creates challenges to interactively tweak maps and shaders precisely. Because the Shader Graph wasn’t available for HDRP when we started this project, we asked our development team to create a custom texture editor tool allowing us to load and tweak each map in our textures independently and directly in the standard HDRP shader. This editor tool enabled us to work very efficiently by fine-tuning materials directly in Unity. Now that the Shader Graph is available for HDRP, this same capability is available to all Unity customers.
Custom texture editing
Post-processing is essential to give a film-like and realistic feeling to a 3D scene. HDRP provides many available options to tweak the final look of a scene, including color grading, bloom, vignetting, depth of field, and more.
Door interior with and without post-processing
Depth of field is an essential visual effect to achieve realistic visualization. It helps the viewer to focus on a specific area and feel immersed in the scene. Applying fixed values for the depth of field focus point within a real-time app could produce unnatural results, so we made a small camera script that automatically sets the focus on the closest object in front of the camera. It was very easy in Unity for us to cast rays within the scene to determine the closest object.
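A script in the spirit of the one described above could look like this minimal sketch: cast a ray from the camera each frame and drive the depth-of-field focus distance from the nearest hit. It assumes the Post Processing Stack v2 package; the actual Light & Shadows script is not public, so treat the details as illustrative:

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Auto-focus sketch: set the depth-of-field focus distance to the
// closest object directly in front of the camera.
[RequireComponent(typeof(Camera))]
public class AutoFocus : MonoBehaviour
{
    // Assign the PostProcessVolume whose profile contains Depth of Field.
    public PostProcessVolume volume;

    DepthOfField dof;

    void Start()
    {
        volume.profile.TryGetSettings(out dof);
    }

    void Update()
    {
        if (dof == null)
            return;

        // Ray from the camera position along its forward axis.
        if (Physics.Raycast(transform.position, transform.forward,
                            out RaycastHit hit))
            dof.focusDistance.value = hit.distance;
    }
}
```

Smoothing the focus distance over a few frames (for example with Mathf.Lerp) avoids visible focus popping when the camera sweeps across objects at different depths.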
Interior with and without depth-of-field effect
We used Cinemachine in Unity to make what we call a “Demo” mode, a kind of cinematic showcase mode with predefined camera paths, to show key features of the product even when nobody is actively interacting with the car.
To accurately represent the car, we provided a way to manage its many different configurations. Not only did we need to switch among variants in the runtime (executable) app, but also within the Unity Editor, to ensure that all variants were correctly defined. To achieve this, we created a configuration management script that records “scene states” and provides automatic material assignment.
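The "scene state" idea can be sketched as follows — a hypothetical, simplified version (all names invented) that records which renderers get which material and which objects are visible for a given variant, and can be applied both at runtime and from the Editor:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified "scene state" sketch: one component per variant, listing
// renderer visibility and material assignments for that configuration.
public class SceneStateApplier : MonoBehaviour
{
    [System.Serializable]
    public class Assignment
    {
        public Renderer renderer;
        public Material material; // leave null to keep the current material
        public bool active = true;
    }

    public List<Assignment> assignments = new List<Assignment>();

    // ContextMenu makes this runnable from the Inspector's gear menu,
    // so variants can be verified in the Editor as well as in Play mode.
    [ContextMenu("Apply Variant")]
    public void Apply()
    {
        foreach (var a in assignments)
        {
            if (a.renderer == null)
                continue;
            a.renderer.gameObject.SetActive(a.active);
            if (a.material != null)
                a.renderer.sharedMaterial = a.material;
        }
    }
}
```

Recording one such state per variant and applying it on demand is one way to ensure every option combination is correctly defined before building the runtime app.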
Variant configuration logic
Four different interior variants
Our experience with this project has given us the confidence to use HDRP to pursue new customer projects demanding the highest level of visual quality. We are able to deploy new projects very efficiently through the use of PiXYZ for data preparation together with the latest rendering, cinematic, and post-processing tools in Unity and our own in-house custom tools and scripts.
We at Unity would like to thank Light & Shadows for this blog post and for the great work they did on this video. More information about Unity’s solutions for Automotive and Transportation can be found here.
Last month, several members from the AI @ Unity team were present at NeurIPS in Montreal. At the Unity booth, we had the opportunity to meet hundreds of researchers and introduce them to Artificial Intelligence and Machine Learning projects at Unity. Later this month, we’re heading to AAAI-19 (an annual AI conference) in Honolulu where we’ll be hosting a booth, and also co-organizing the AAAI-19 Workshop on Games and Simulations for Artificial Intelligence. In this blog post, we’ll provide you with a brief overview of the workshop and explain why we are eager to foster research that leverages games and simulation platforms.
If you’re attending AAAI, consider joining our workshop on January 28 – it’s packed with fantastic speakers and papers covering games and simulations for AI. Also, drop by our booth (January 29 – 31) to say hi, watch some demos, and learn about teams and projects at Unity.
A brief history of games in AI research
Games have a long history in AI research, dating back to at least 1949 when Claude Shannon (shortly after developing information entropy) got interested in writing a computer program to play the game of Chess. In his paper “Programming a Computer for Playing Chess”, Shannon writes:
“The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require “thinking” for skilful[sic] play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of “thinking”; (4) the discrete structure of chess fits well into the digital nature of modern computers.”
That was in 1949. Since then, there has been an enduring interest in creating computer programs that can play games as skillfully as human players, even beating respective world champions. Shannon inspired Arthur Samuel’s seminal work on Checkers in the 1950s and 1960s. While Samuel’s program was unable to beat expert players, it was considered a major achievement as it was the first program to effectively utilize heuristic search procedures and learning-based methods. The first success story of achieving expert-level ability was Chinook, a checkers program developed at the University of Alberta in 1989 that began beating most human players; by 1994 the best players could at best play it to a draw. This trend continued with other 2-player board games such as Backgammon (with Gerald Tesauro’s TD-Gammon, 1992-2002) and Chess (when IBM’s Deep Blue beat Garry Kasparov, 1997), and most recently with Go. An important scientific breakthrough of the last few years was when, in 2016, DeepMind’s AlphaGo beat 18-time world champion Lee Sedol 4 to 1, the subject of the Netflix documentary, AlphaGo.
The progress over the last 70 years since Claude Shannon’s paper has not been limited to solving increasingly more difficult 2-player board games but has expanded to other complex scenarios. These include 3D multiplayer games such as Starcraft II and Dota 2 and more challenging game tasks such as learning to play Doom and Atari 2600 games using only the raw screen pixel inputs instead of a hand-coded representation of the game state. In a 2015 Nature paper, DeepMind presented a deep reinforcement learning system, termed deep Q-network (DQN), that was able to achieve superhuman performance on a number of Atari 2600 games using only the raw screen pixel inputs. What was particularly remarkable was how a single system (fixed input/output spaces, algorithm, and parameters), trained independently on each game, was able to perform well on such a large number of diverse games. More recently, OpenAI developed OpenAI Five, a team of five neural networks that can compete with amateur players in Dota 2.
The effectiveness of game engines & simulation platforms
It’s not just games that have played a central role in AI development. Game engines (and other simulation platforms) themselves are now becoming a powerful tool for researchers across many disciplines such as robotics, computer vision, autonomous vehicles, and natural language understanding.
A primary reason for adopting game engines for AI research is the ability to generate large amounts of synthetic data. This is exceptionally powerful as recent advances in AI and the availability of managed hardware in the cloud (e.g. GPUs, TPUs) have resulted in algorithms that can efficiently leverage huge volumes of data. Our partnership with DeepMind is one example of a premier research lab fully investing in utilizing virtual worlds to study AI. The use of game engines is even more profound in scenarios in which data set generation in the real world is prohibitively expensive or dangerous. A second reason for adopting game engines is their rendering quality and physics fidelity which enables the study of real-world problems in a safe and controlled environment. It also enables models trained on synthetic data to be transferred to the real world with minimal changes. A common example is training self-driving cars and Baidu’s move to leverage Unity to evaluate its algorithms is representative of an ongoing shift to embrace modern game engines.
AI is dubbed the new electricity due to its potential to transform multiple industries. We foresee game engines and simulation platforms playing a very important role in that transformation. This is evident by the large number of platforms that have recently been created to study a number of research problems such as playing video games, physics-based control, locomotion, 3D pose estimation, natural language instruction following, embodied question answering, and autonomous vehicles (e.g. Arcade Learning Environment, Starcraft II Learning Environment, ViZDoom, General Video Game AI, MuJoCo, Gibson, Allen Institute AI2-Thor, Facebook House3D, Microsoft AirSim, CARLA). The list also includes our own Unity ML-Agents Toolkit which can be used to transform any Unity scene into a learning environment to train intelligent agents using deep reinforcement learning and imitation learning algorithms. Consequently, we’re eager to encourage and foster AI research that leverages games and simulation platforms.
AAAI-19 workshop overview
At AAAI, later this month, we are co-organizing the Workshop on Games and Simulations for AI with Julian Togelius (Professor at New York University) and Roozbeh Mottaghi (Research Scientist at the Allen Institute for Artificial Intelligence). The workshop will include a full day of presentations by invited speakers and authors of peer-reviewed papers. The presentations will cover a number of topics including large-scale training of deep reinforcement learning systems such as AlphaGo, high-performance rendering for learning robot dexterity, learning to map natural language to controls of a quadcopter, and using drones to protect wildlife in the African savannah. If you are attending AAAI, join us at the workshop to learn more about how games and simulations are being used to power AI research.
Do you have any older projects that use UnityScript? If so, you might be interested in our open source UnityScript to C# conversion tool available to download now.
Since the tool was first announced, we’ve collected a lot of feedback and fixed a bunch of issues. To name a few changes motivated by your feedback:
Support for preserving comments from original scripts
Improved support for a number of UnityScript constructs
With Unity 2018.2, we completely removed the option to create new UnityScript scripts. Now we believe that the conversion tool is feature complete and stable enough to help any of you that still have projects using UnityScript.
Before starting the conversion process, we suggest that you make sure your project builds cleanly on Unity 2018.1, all of your tests are passing and you understand the limitations listed here.
We recommend running the conversion tool through the menu Tools/Convert UnityScript to C#. If you need more control over the parameters used during the conversion, you can run it through the command line, but keep in mind that the extra flexibility comes with extra complexity.
As an example, the video below outlines the process of converting an older version of the Angry Bots project. This is for demonstration purposes only, so we simply commented out any code causing compilation errors when first opening the project in 2018.1.
To summarize, the basic process should be something like:
Backup your project
Open project in 2018.1
Accept API Updater offer (if any) and fix remaining errors
Make sure player builds successfully on each target platform
Make sure all related tests are passing (also, run the project on real devices)
Note that if your project targets multiple platforms, you may need to repeat the conversion for each target platform by selecting the platform before running the conversion tool, and then manually merge the converted code, wrapping it with conditional compilation directives accordingly. This is a limitation of the tool. In this case, you’ll probably want to use a VCS to restore the state of the project after each conversion.
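The conditional wrapping mentioned above can look like the following minimal sketch, using Unity’s built-in platform defines so each branch of the merged, converted code only compiles for its own target (the class and comments are illustrative, not output of the tool):

```csharp
using UnityEngine;

// Illustrative only: after running the conversion once per platform,
// the merged result can be wrapped in Unity's platform defines so each
// branch only compiles for its target platform.
public class PlatformSpecificBehaviour : MonoBehaviour
{
    void Update()
    {
#if UNITY_XBOXONE
        // code produced by the Xbox One conversion pass
#elif UNITY_IOS
        // code produced by the iOS conversion pass
#else
        // default / Editor path
#endif
    }
}
```

UNITY_XBOXONE and UNITY_IOS are standard Unity scripting defines, evaluated per build target.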
Fixing API usages after the conversion may or may not be required (it depends on which APIs your project uses), but in the Angry Bots example, we needed to fix some API usages.
Since the tool is open source, you’re invited to download its source and look around! Please feel free to contribute any fixes/improvements.
If for any reason this tool does not fit your needs, you may want to check other converters available on the Asset Store.
Finally, if you need any help with the tool, please ask in this forum thread and we’ll do our best to help you.
DirectX 12 is the latest version of Microsoft’s graphics API designed to enable a reduction of driver overhead, allowing better use of multi-core systems. Depending on your project, you could see some great performance improvements with DirectX 12. For example, with our Book of the Dead: Environment scene we have seen a frame rate increase of over 8% running at 1440p on Xbox One X.
DirectX 12 on Xbox One brings with it Unity’s new Native Graphics Jobs, which also contributes a significant CPU performance improvement. Alongside these performance benefits, DirectX 12 also brings support for new rendering techniques in Unity, starting with Async Compute, which is available with DirectX 12 on Xbox One out of the box. Async Compute provides valuable GPU performance improvements for any titles that make use of compute on Xbox One.
Enabling DirectX 12
In Unity 2018.3, you will need to enable DirectX 12 explicitly. Open Player Settings from the Edit menu.
Disable the Auto Graphics API checkbox, add XboxOneD3D12 (Experimental) to the Graphics API list, and then remove XboxOne from the same list.
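If you prefer to make the same change from an Editor script (for example, in a build pipeline), a sketch along these lines should work — the menu path is invented, and as always with an experimental backend, verify the result in Player Settings afterward:

```csharp
using UnityEditor;
using UnityEngine.Rendering;

// Editor-only sketch: switch the Xbox One target to the experimental
// DirectX 12 backend from script instead of through Player Settings.
public static class EnableDx12ForXboxOne
{
    [MenuItem("Tools/Enable DX12 on Xbox One")] // hypothetical menu path
    static void Enable()
    {
        // Equivalent of unchecking "Auto Graphics API".
        PlayerSettings.SetUseDefaultGraphicsAPIs(BuildTarget.XboxOne, false);

        // Replace the API list with DirectX 12 only.
        PlayerSettings.SetGraphicsAPIs(BuildTarget.XboxOne,
            new[] { GraphicsDeviceType.XboxOneD3D12 });
    }
}
```

This file must live in an Editor folder since it uses the UnityEditor API.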
Throughout 2019 we will make DirectX 12 the default for all new projects. While we will maintain DirectX 11 for the foreseeable future, our primary focus is on improving the performance and feature set of DirectX 12.
If you encounter any problems with DirectX 12 please contact our support team or post on the Xbox One forums and we’ll be happy to look into this issue for you. We are ready and waiting to act on your feedback!
How do I bring my game to Xbox One?
Unity for Xbox One is available to anyone with an Xbox One development kit. Head over to www.xbox.com/id to sign up for Microsoft’s independent developer publishing program for Xbox One, ID@Xbox.
Once you’ve registered as a Microsoft developer, a representative will be in contact with you to let you know when you’ve been given access to the Xbox One Development forum for Unity. Here you can find an active community of other developers, get support from Unity, and download the Xbox One Unity editor add-on.