Playstation Lifestyle

Rainbow Six Siege Year 5 Teaser Reveals Operation Void Edge

 


Ubisoft isn’t ready to fully unveil what’s to come with Rainbow Six Siege’s Year 5, but that isn’t stopping the company from teasing the upcoming batch of content. As listed on the Rainbow Six Twitter page, Year 5 will kick off with Operation Void Edge, under the tagline “Look into your darkness.” This tease comes just before the R6 Invitational begins, which will run from February 11 to February 16, 2020. This Rainbow Six Siege event will feature tournaments and content reveals for the upcoming year, including more details on Operation Void Edge and the game’s fifth year.


Operation Void Edge is expected to feature new cosmetics, maps, and Operators, the latter rumored to carry the codenames Oryx and Yana. It’s currently unconfirmed what each Operator will look and play like, but rumors suggest Oryx will have the ability to break through walls, while Yana is said to have some sort of hologram that can be used as a distraction.

It’s also unconfirmed when Operation Void Edge will release, but whispers put it around March 10, 2020. That’s a Tuesday, the day of the week on which games and add-on content have historically tended to launch.

The game’s Community Management Lead François Roussel described Year 5’s content as having a “direction focused on features that benefit all players, not just those playing the newest Operators.” He also added that it will include “free events, extensive reworks, and other core gameplay features.”

Earlier this year, Year 5’s contents allegedly leaked, pointing to six new Operators, new cosmetics, a Renown boost, and other benefits. Ubisoft commented at the time, advising fans to wait for the R6 Invitational for additional information.

Rainbow Six Siege continues its strong run five years on, racking up over 55 million unique players worldwide since its 2015 release.

[Source: Twitter]

Read the original article

GameInformer Reviews

7th Sector Review – A Puzzler With A Spark

Publisher: Sometimes You
Developer: Sergey Noskov
Rating: Teen
Reviewed on: Xbox One
Also on: PlayStation 4, PC

 

 

A television set comes to life, static filling its screen. Within the noise, you can barely make out a humanoid figure. Moving the analog stick makes this faint specter move, but it’s trapped in this small box with no clear objective or interaction point. This is 7th Sector’s first puzzle, and it’s a bit of a doozy, at first making me think my game may have glitched out. Developer Sergey Noskov gives you no indication of what you need to do in this moment. Almost every puzzle is free of guidance, which can lead to moments of frustration. But more often, it leads to the satisfaction of having the insight to figure something out on your own. This is a game that lives and dies by its obtuse design.

After the first puzzle is solved in a fairly unconventional way, 7th Sector only gets stranger. You don’t take control of the human you saw or anything even close to a typical game character. You become an ordinary electrical spark. It can’t emote or do anything other than travel along cables to devices that it can bring to life. As the spark moves through the world, you see a dystopian cyberpunk story unfold in the background, catching glimpses of robots warring with humans and even more distressing and fascinating things. This is a clever way of telling a story, but it doesn’t deliver much excitement or build up until the final act, which concludes with a great reveal in the vein of The Matrix.

 


The spark is used effectively for clever navigational puzzles in a side-scrolling setting, pushing you to figure out how to reach other cables or open doors. Just when it seems like the puzzles mostly revolve around figuring out patterns or using the correct timing, Noskov (who is a one-person development team) throws math at you. You are asked to solve fairly simple math problems, like figuring out which group of numbers adds up to 220 (a specific number used often in 7th Sector), but you are also presented with division and multiplication tests. If a math problem is too hard, you can randomly click numbers to brute-force the solution without negative consequences (as I did once). That said, by the time the credits rolled, I looked back on the math and found it connected nicely to what Noskov is trying to reveal in the story. It’s quite clever.

Just as I was getting used to controlling my emotionless blue spark, the gameplay changes completely to power up a robotic ball, which you then control. Outside of more math, the puzzles become entirely different at this point, yet just as fun and directionless, only now involving more physics-based actions. Just when it seemed I had the ball gameplay down, the spark jumps into another robot. This one is much bigger and has guns, which leads to some combat amidst even more puzzles. The combat isn’t great, and is easily the weakest part of the game, but it is used sparingly and is only mildly frustrating.


The spark moves on to other entities as well, but it wouldn’t be fair of me to spoil where this adventure goes next. In the three to four hours it took to complete 7th Sector, I was eager to see what would happen next, all while cursing math and not knowing what I needed to do. I got stumped a few times, but the puzzle spaces are small and most of their interactive elements are easy to spot.

I’m a big fan of Playdead’s Limbo and Inside, and 7th Sector scratches the same kind of itch, but in much stranger and mathematical ways. It’s a journey worth taking, but just know you’re often left directionless and perhaps in need of a calculator.

Score: 7.5

Summary: Robots, electricity, and math all come into play in this unique side-scrolling puzzle game.

Concept: A side-scrolling adventure that delivers little direction in its puzzles, yet makes you feel great when solving them

Graphics: The dark cyberpunk setting sells the mood, and the backdrops effectively tell the story

Sound: The score is appropriately empty and eerie. Sound is sometimes used to help guide you through puzzles

Playability: Outside of having to solve math problems, many of the puzzles are clever one-offs, meaning you almost always have new challenges to look forward to

Entertainment: The story delivers a nice payoff at the end, making the difficult puzzles worth the time

Replay: Moderately Low

https://www.gameinformer.com/review/7th-sector/7th-sector-review-a-puzzler-with-a-spark

Playstation Nation Trailers

Call of Duty: Modern Warfare Season Two Begins Tuesday

10. Feb, 2020

 


Check out the trailer for Call of Duty: Modern Warfare Season Two. Take a look below at the details and stay tuned to PS Nation for more news on this game and many others.

I have been playing it in split-screen with the wife since I did the Call of Duty: Modern Warfare review and look forward to jumping into the new maps.

Season Two of Modern Warfare returns Ghost to the firefight and brings more Multiplayer experiences through maps, modes, and an all-new Battle Pass. Plus earn rewards to climb through the Season Two Ranks with more ribbons to earn, new challenges to complete, and new missions for both Multiplayer and Special Ops.

In Season Two, Al-Qatala agents have stolen a Soviet nuclear warhead and smuggled it into the city of Verdansk. Combined with previously obtained chemical gas, Al-Qatala is determined to cut off Verdansk from the rest of the world. On the brink of a global catastrophe, Captain Price sends in Ghost to track down the location of the warhead and lead key Operators before it’s too late.

Battle on More Maps for Multiplayer

New battlegrounds arrive in Modern Warfare for classic Multiplayer, Ground War, and Gunfight game modes. Each map has its own distinct look and feel for a unique, tactical, and grounded gaming experience.

Rust – Standard Multiplayer and Gunfight (Launch Day)

The classic map returns! A small map for fast-paced combat, Rust brings the battle to an oil yard in the middle of the desert. The site of the Modern Warfare 2 Campaign mission, “Endgame”, may be just that for players who don’t learn the tricks to master this arid field of play. Utilize the environment for cover and grab the high ground and low ground to gain an advantage over your enemies.

Atlas Superstore – Standard Multiplayer (Launch Day)

Go shopping in Atlas Superstore, a new Multiplayer map that takes place in a supercenter warehouse that has been taken over by Al-Qatala forces. Battle in dense lanes of traffic, over fallen shelving, and throughout the shipping, receiving, and employee-only areas. Clean up on aisle six!

Khandor Hideout – Standard Multiplayer (Mid-Season)

[Redacted Intel]

Zhokov Boneyard – Ground War (Launch Day)

A resting place for discarded airplane parts, Zhokov Boneyard is a Ground War map in Verdansk. Traipse through this airplane junk yard and avoid the turbulence of the enemy team while capturing your objectives.

Bazaar – Gunfight (Launch Day)

A tightly contained cross-section of the streets of Urzikstan is turned into a battleground in Bazaar. Experience the tension-filled Gunfight while navigating a new zone of combat. The randomized Gunfight loadouts all have a chance to shine, as the map’s symmetrical layout offers opportunities for big moments and epic plays.

Some information obtained from the Activision Blog.
What do you think of this one? Let us know in the Forums or on Twitter using #AskPSNation.


Written by Chazz Harrington

Chazz Harrington

You can find me on everything: PSN, Twitter, Origin, Steam, etc., using my universal ID: ChazzH69

If you send a friend request please add ‘PS Nation’ in the subject area.

 

Read the original article

Unity Technologies Blog

Achieve beautiful, scalable, and performant graphics with the Universal Render Pipeline

 


Universal Render Pipeline is a powerful, ready-to-use solution with a full suite of artist tools for content creation. You should use this rendering pipeline if you want to make a game that has full Unity platform reach with best-in-class visual quality and performance. We covered the benefits of the Universal Render Pipeline in an earlier blog post. In this post, we’ll dive into how the Universal Render Pipeline was used to create the vertical slice Boat Attack demo.

We first created the Boat Attack demo to help us validate and test the Universal Render Pipeline (which, at the time, was known as the Lightweight Render Pipeline). Producing a vertical slice as part of our development process was also an exercise in using real-world production processes in our feature development.

We have upgraded the Boat Attack demo considerably since we first created it. It now uses many of the Universal Render Pipeline’s new graphical features, along with recent Unity features such as the C# Job System, Burst compiler, Shader Graph, Input System, and more.

You can download the Boat Attack demo now, and start using it today with Unity 2019.3.

The Demo

The Boat Attack demo is a small vertical slice of a boat-racing game. It is playable and we are continually adjusting it to take full advantage of the latest Unity features.

The demo is designed to work well on a wide variety of platforms: mid- to high-range mobile devices, all current consoles and standalone apps. We demonstrated Boat Attack live at Unite Copenhagen 2019 on a range of devices, from the iPhone 7 to the PlayStation 4.

To use the demo, we suggest you install the latest version of Unity 2019.3, and then grab the project from GitHub (make sure to read the readme for usage instructions).

Shader Graph

Shader Graph is an artist-friendly interface for creating shaders. It’s a powerful prototyping tool for technical artists. We used Shader Graph to create some of the unique shading effects in the Boat Attack demo.

Using Shader Graph allowed us to create great shading effects, and then painlessly maintain them across many versions of the Lightweight Render Pipeline and the Universal Render Pipeline.

The cliff shader in Boat Attack demonstrates the effects you can achieve using mesh data – it’s easy to get data from a mesh in Shader Graph. We use the normal vector of the mesh to draw grass on the parts of the cliff face that are flat and facing upwards, and we use the world space height of the mesh to ensure that cliffs and rocks close to the water level will not have grass.

 

From left to right: Y height mask, Y normal mask, height + normal mask remapped, final shader.

Vegetation shading

The vegetation in Boat Attack was initially a custom vertex/fragment shader, but this was painful to maintain when the render pipeline was in early development and code was changing frequently. Recreating the shader in Shader Graph let us take advantage of Shader Graph’s easy upgradeability.

This Shader Graph effect is based on an implementation from Tiago Sousa of Crytek, which makes great use of vertex colors to control wind animation via vertex displacement. In Boat Attack, we created a Sub-graph to house all the required nodes needed for calculating the wind effect. The Sub-graph contains nested Sub-graphs, which are a collection of utility graphs that perform repeating math calculations.

Individual vertex animations and their masks. From left to right: main bending from distance to origin, leaf edge from vertex color Red channel, and branches from vertex color Blue using vertex color Green channel for phase offset.

Another big part of creating believable vegetation is subsurface scattering (SSS), which is not currently available in the Universal Render Pipeline. However, you can use Shader Graph’s custom function node to retrieve lighting information from the Universal Render Pipeline and create your own SSS-like effect.

 

Node layout. The SSS Mask is made from the vertex color Green (leaf phase) and the albedo texture map.

The custom function node gives you a lot of creative freedom. You can read up on custom rendering techniques here, or simply grab the code for the node in the Boat Attack repository to try out your own custom lighting ideas.

 

From left to right: without SSS, SSS only, final shader.

Boat customization

The boats needed to have multiple variations of colors. In Substance Painter, two livery masks were painted and stored in a packed texture containing Metallic (red), Smoothness (green), Livery 1 (blue) and Livery 2 (alpha). Using the masks via Shader Graph, we can selectively apply coloring to these masked areas.

 

An overview of how the boats are colored. Using overlay blending allows subtle coloring to come through the base albedo map.

 

The node layout in Shader Graph, wrapped into a Sub-graph for easy use in the parent RaceBoats graph.

Houses

Boat Attack covers a full day/night cycle. To enhance this illusion, we created a Shader Graph for the windows of the buildings throughout the level. The Shader Graph lights up the windows at dusk and switches them off at dawn.

We achieved this using a simple emission texture that was mapped to a day/night value. We added an effect to slightly randomize the order, using the objects’ positions, so that the houses would light up at different times.

 

The node map that enables random emissions.

Clouds

Now that we have added changing lighting to Boat Attack, a simple high-dynamic-range image (HDRI) skybox is no longer sufficient. The clouds should be dynamically lit by the lighting in the Scene.

But rendering big puffy clouds in real-time is demanding, especially with the need to run on mobile hardware. Because we don’t need to see the clouds from many angles, we decided to use cards with textures to save on performance.

 

The whole current graph for rendering the clouds.

Shader Graph was crucial in prototyping the look. We baked out some volumetric cloud data from Houdini, and created fully custom lighting in Shader Graph. These clouds are still a work in progress, but they prove that a wide range of surfaces can be created with the node-based editor.

Rendering from API for seamless Planar Reflections

Unity’s goal with Scriptable Render Pipelines was to allow users to customize rendering code, instead of hiding it in a black box. Rather than simply opening up our existing rendering code, we pushed our rendering tech with new APIs and hardware in mind. 

The Universal Render Pipeline lets you extend its out-of-the-box rendering capabilities with your own C#. It exposes 4 hooks:

  • RenderPipelineManager.beginFrameRendering
  • RenderPipelineManager.beginCameraRendering
  • RenderPipelineManager.endCameraRendering
  • RenderPipelineManager.endFrameRendering

These hooks let you easily run your own code before rendering the Scene or before rendering certain Cameras. In Boat Attack, we used these hooks to implement Planar Reflections by rendering the Scene into a texture before the main frame is rendered.

Because this is a callback we subscribe to, we also unsubscribe from it in OnDisable.

Here we can see the entry point in the Planar Reflection script. This code lets us call a custom method every time Universal Render Pipeline goes to render a camera. The method we call is our ExecutePlanarReflections method:
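The script itself is embedded in the original post rather than reproduced here, so the following is a minimal sketch of that entry point, assuming a MonoBehaviour-style component; the class layout and field names are illustrative rather than the exact code from the Boat Attack repository.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

[ExecuteAlways]
public class PlanarReflections : MonoBehaviour
{
    private Camera _reflectionCamera;          // renders the mirrored scene
    private RenderTexture _reflectionTexture;  // target the water shader samples later

    private void OnEnable()
    {
        // URP invokes this event just before it renders each camera in the frame.
        RenderPipelineManager.beginCameraRendering += ExecutePlanarReflections;
    }

    private void OnDisable()
    {
        // Unsubscribe so the callback never fires on a disabled component.
        RenderPipelineManager.beginCameraRendering -= ExecutePlanarReflections;
    }

    private void ExecutePlanarReflections(ScriptableRenderContext context, Camera camera)
    {
        // Reflection rendering happens here; see the full method below.
    }
}
```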

Because we are using the [beginCameraRendering] callback, our method must take a [ScriptableRenderContext] and a [Camera] as its parameters. This data is piped through with the callback, and it will let us know which Camera is about to render.

For the most part, the code here is the same code as you would normally use to implement planar reflections: you are dealing with cameras and matrices. The only difference is that Universal Render Pipeline provides a new API for rendering a camera. 

The full method for implementing planar reflections is as follows:
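Again, what follows is a condensed, illustrative sketch rather than the repository’s exact method. It slots into the component above; UpdateReflectionCamera and the global texture name are placeholders standing in for the real mirroring and matrix setup.

```csharp
private void ExecutePlanarReflections(ScriptableRenderContext context, Camera camera)
{
    // Skip reflection and preview cameras so we never recurse into ourselves.
    if (camera.cameraType == CameraType.Reflection || camera.cameraType == CameraType.Preview)
        return;

    // Mirror the current camera about the water plane and point it at our texture.
    UpdateReflectionCamera(camera);                      // placeholder for the matrix setup
    _reflectionCamera.targetTexture = _reflectionTexture;

    // Render the reflection camera through URP's camera-rendering API.
    UniversalRenderPipeline.RenderSingleCamera(context, _reflectionCamera);

    // Expose the result so the water shader can sample it later in the frame.
    Shader.SetGlobalTexture("_PlanarReflectionTexture", _reflectionTexture);
}
```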

Here we use the new [UniversalRenderPipeline.RenderSingleCamera()] method to render a given camera. In this case, the camera is our Planar Reflection Camera.

Since this camera renders to a texture (which we set using [Camera.targetTexture]), we now get a RenderTexture we can use in our water shading later in the rendering. Check out the whole PlanarReflection script on the GitHub page.

 

Planar reflection composition. From left to right: raw planar reflection camera output, fresnel darkening and normal offsetting, final water shader, water shader without planar reflections.

These callbacks are used here to invoke some rendering, but they can be used for several things. For example, we also use them to disable shadows on the Planar Reflection Camera, or choose which Renderer to use for a camera. Rather than hard coding the behavior in the Scene or a Prefab, using an API allows you to handle more complexity with greater control.

Injecting Custom Render Passes for specialized effects

In the Universal Render Pipeline, rendering is based upon ScriptableRenderPasses. These are instruction sets on what and how to render. Many ScriptableRenderPasses are queued together to create what we call a ScriptableRenderer.

Another part of Universal Render Pipeline is ScriptableRendererFeatures. These are essentially data containers for custom ScriptableRenderPasses and can contain any number of passes inside along with any type of data attached.

Out of the box we have two ScriptableRenderers, the ForwardRenderer and the 2DRenderer. ForwardRenderer supports injecting ScriptableRendererFeatures.

To make it easier to create ScriptableRendererFeatures, we added the ability to start with a template file, much like we do for C# MonoBehaviour scripts. You can simply right-click in the Project view and choose [Create/Rendering/Universal Pipeline/Renderer Feature]. This creates a template to help you get started. Once created, you can add your ScriptableRendererFeature to the Render Feature list on the ForwardRendererData assets.
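For reference, the generated template is roughly shaped like the sketch below (abridged here): a feature class wrapping a ScriptableRenderPass, a Create() override that builds the pass, and an AddRenderPasses() override that enqueues it.

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CustomRenderPassFeature : ScriptableRendererFeature
{
    class CustomRenderPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            // Custom rendering commands go here.
        }
    }

    CustomRenderPass m_ScriptablePass;

    public override void Create()
    {
        m_ScriptablePass = new CustomRenderPass();
        // Decide where in the frame the pass runs.
        m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_ScriptablePass);
    }
}
```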

In the Boat Attack demo, we used ScriptableRendererFeatures to add two extra rendering passes for the water rendering: one for caustics and one called WaterEffects.

Caustics

The Caustics ScriptableRendererFeature adds a pass that renders a custom caustics shader over the scene between the Opaque and Transparent passes. This is done by rendering a large quad aligned with the water to avoid rendering all the pixels that might be in the sky. The quad follows the camera but is snapped to the water height, and the shader is additively rendered over what’s on screen from the opaque pass.

 

Caustic Render Pass compositing. From left to right: depth texture, world space position reconstruction from depth, caustics texture mapped with world space position, and final blending with Opaque pass.

Using [CommandBuffer.DrawMesh], you can draw the quad, supply a matrix to position the mesh (based on water and camera coordinates), and set up the caustics material. The code looks like this:
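The embedded snippet isn’t reproduced here; the sketch below shows the general shape such a pass can take, assuming the quad mesh, caustics material, and water level are handed in by the renderer feature. Class and variable names are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class WaterCausticsPass : ScriptableRenderPass
{
    private readonly Mesh _causticsQuad;        // large quad aligned with the water surface
    private readonly Material _causticsMaterial;
    private readonly float _waterLevel;

    public WaterCausticsPass(Mesh quad, Material material, float waterLevel)
    {
        _causticsQuad = quad;
        _causticsMaterial = material;
        _waterLevel = waterLevel;
        renderPassEvent = RenderPassEvent.AfterRenderingOpaques; // between opaque and transparent
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("Water Caustics");

        // Follow the camera on X/Z, but snap the quad to the water height on Y.
        Vector3 position = renderingData.cameraData.camera.transform.position;
        position.y = _waterLevel;
        Matrix4x4 matrix = Matrix4x4.TRS(position, Quaternion.identity, Vector3.one);

        // Additively draw the caustics quad over the opaque results.
        cmd.DrawMesh(_causticsQuad, matrix, _causticsMaterial);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```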

Water effects

Split view of the WaterFXPass in action. Left, the final render; right, a debug view showing only the result of the pass on the water.

The WaterFXPass is a bit more complex. The goal for this effect was to have objects affect the water, such as making waves and foam. To achieve this, we render certain objects to an offscreen RenderTexture, using a custom shader that is able to write different information into each channel of the texture: a foam mask into red channel, normal offset X and Z into green and blue, and finally water displacement in the alpha channel.

 

WaterFXPass compositing. From left to right: final output, the green and blue channels used to create world space normals, the red channel used for a foam mask, and the alpha channel used for creating water displacement (red positive, black no change, blue negative).

First, we need a texture to render into, which we create at half resolution. Next, we create a filter for any transparent objects that have the shader pass called WaterFX. After this, we use [ScriptableRenderContext.DrawRenderers] to render the objects into the scene. The final code looks like this:
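As above, this is an illustrative sketch rather than the repository’s exact pass: the render-target name, clear colour, and pass event are assumptions, but the half-resolution target, the "WaterFX" shader-pass filter, and the DrawRenderers call follow the description in the text.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class WaterFXPass : ScriptableRenderPass
{
    private static readonly ShaderTagId WaterFXShaderTag = new ShaderTagId("WaterFX");
    private RenderTargetHandle _waterFXMap;
    private FilteringSettings _filteringSettings = new FilteringSettings(RenderQueueRange.transparent);

    public WaterFXPass()
    {
        _waterFXMap.Init("_WaterFXMap");
        renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
    }

    public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
    {
        // Half-resolution offscreen target for foam, normal offsets, and displacement.
        RenderTextureDescriptor descriptor = cameraTextureDescriptor;
        descriptor.width /= 2;
        descriptor.height /= 2;
        cmd.GetTemporaryRT(_waterFXMap.id, descriptor);

        ConfigureTarget(_waterFXMap.Identifier());
        // Alpha of 0.5 means "no displacement" in the channel encoding described above.
        ConfigureClear(ClearFlag.All, new Color(0f, 0f, 0f, 0.5f));
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // Draw only transparent renderers that provide a "WaterFX" shader pass.
        DrawingSettings drawingSettings = CreateDrawingSettings(
            WaterFXShaderTag, ref renderingData, SortingCriteria.CommonTransparent);
        context.DrawRenderers(renderingData.cullResults, ref drawingSettings, ref _filteringSettings);
    }

    public override void FrameCleanup(CommandBuffer cmd)
    {
        cmd.ReleaseTemporaryRT(_waterFXMap.id);
    }
}
```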

Both of these ScriptableRenderPasses live in a single ScriptableRendererFeature. This feature contains a [Create()] function that you can use to set up resources and also pass along settings from the UI. Since they are always used together when rendering water, a single feature can add them to the ForwardRendererData. You can see the full code on Github.
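A sketch of how such a feature could tie things together, reusing the two pass classes sketched above; the feature name and settings fields are illustrative, and the real feature in the repository passes along considerably more data.

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class WaterEffectsFeature : ScriptableRendererFeature
{
    [Serializable]
    public class Settings
    {
        public Material causticsMaterial;   // exposed in the inspector on the ForwardRendererData asset
        public Mesh causticsQuad;
        public float waterLevel;
    }

    public Settings settings = new Settings();

    private WaterCausticsPass _causticsPass;
    private WaterFXPass _waterFXPass;

    public override void Create()
    {
        // Set up both passes once, passing along the settings from the UI.
        _causticsPass = new WaterCausticsPass(settings.causticsQuad, settings.causticsMaterial, settings.waterLevel);
        _waterFXPass = new WaterFXPass();
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        // One feature enqueues both water-related passes every frame.
        renderer.EnqueuePass(_waterFXPass);
        renderer.EnqueuePass(_causticsPass);
    }
}
```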

Future plans

We will continue to update this project throughout the Unity 2019 cycle, including 2019.4 LTS. As of Unity 2020.1, we intend to maintain the project to make sure it runs, but we will not add any new features.

Some of the planned improvements include:

  • Finish day/night cycle (this requires more features to be integrated into Universal Render Pipeline to reduce the need for customization)
  • Refine Water UX/UI 
  • Implement Imposters
  • Continue code cleanup and performance tweaks

Useful links

Boat Attack GitHub repository 

Full 2019.3 project link (if you don’t want to use GitHub)

Universal Render Pipeline manual

Universal Render Pipeline and High Definition Render Pipeline

The Universal Render Pipeline does not replace or encompass the High Definition Render Pipeline (HDRP).

The Universal Render Pipeline aims to be the future default render pipeline for Unity. Develop once, deploy everywhere. It is more flexible and extensible, it delivers higher performance than the built-in render pipeline, and it is scalable across platforms. It also has fantastic graphics quality. 

HDRP delivers state-of-the-art graphics on high-end platforms. HDRP is best to use if your goal is more targeted – pushing graphics on high-end hardware, delivering performant, powerful, high-fidelity visuals.

You should choose which render pipeline to use based on the feature and platform requirements of your project.

Start using the Universal Render Pipeline

You can start taking advantage of all the production-ready features and performance benefits today. Upgrade your projects using the upgrade tooling, or start a new project using our Universal Project Template from the Unity Hub.

Please send us feedback in the Universal Render Pipeline forum!

https://blogs.unity3d.com/2020/02/10/achieve-beautiful-scalable-and-performant-graphics-with-the-universal-render-pipeline/

CRYENGINE

Bringing The Climb to Oculus Quest

 


Find out how we brought The Climb to Oculus Quest in our development roundtable.

Our award-winning VR rock climbing game, The Climb, launched on Oculus Quest in December of last year. The game invites players to experience the adrenaline rush of extreme free solo rock climbing as they ascend to epic heights, explore caves, and find shortcuts. Its arrival on the wireless Oculus Quest adds a new dimension to the gameplay experience, and the game is an amazing example of what CRYENGINE can achieve on a mobile chipset. We sat down with Hussein Dari, Lead Level Designer, Sebastien Laurent, Technical Director, Theodor Mader, Lead Rendering Engineer, and Fatih Özbayram, Lead Producer to find out about the development and design process.

Hey guys, thanks for joining us. How pleased are you with how The Climb plays on Oculus Quest?

Fatih: Our goal was to compromise as little as possible to ensure the same immersive and awe-inspiring experience on Quest, and I think it is fair to say this was achieved!

Theo: I really love how natural the climbing feels and how scary the heights are. And of course the impressive vistas! The Oculus Quest version is a step forward for this experience, with more freedom of movement and none of the restrictions of cables.

Hussein: Our aim was not just to port The Climb to the Quest, but also to deliver an experience as faithful as possible to the original game. The Climb runs great and still looks very good on the new device, and that’s without changing any of the gameplay or level layouts.

Sebastien: I think it’s really humbling to see what our team has achieved here. The game is visually stunning and plays beautifully. The reviews we get from the players show how impressive an achievement this is. Regarding the gameplay, one of the main differences when comparing the Quest version to the Rift and Touch combination is that the controllers can easily end up outside of the Quest’s sensor range. Due to the nature of our gameplay, this situation happens often, so we had to handle that case as gracefully as possible, and I think the result is very satisfying.


How does going wireless change the experience?

Fatih: One of the most significant changes that the Quest introduces is that players can stand up and move freely as there are no cables to manage. While we had to optimize the game to fit into memory constraints and to meet the target fps to create a smooth experience, the rock climbing experience feels as immersive on Quest as it does on Rift.

Hussein: It makes climbing more fun as you aren’t limited by the distance of the cable to your PC. You can climb wherever you are. We had a lot of cases in the studio where somebody just carried their device over and showed you something cool that they discovered. Moving to the Oculus Quest also gave us some really positive things. For instance, we don’t need to “rotate” the player body while you are climbing as much as in the original game, and, as in the original, climbing with Motion Controllers is way more fun and realistic.

What were the challenges of bringing The Climb to less powerful hardware, but still ensuring an awesome experience?

Sebastien: There was a large amount of work to be done, and I want to thank our Systems team here, for they had to make sure the engine runs properly on ARM CPUs, make our first Linux-based client application (the OS on the Oculus Quest is based on Android), and they had to solve a bunch of new problems that are specific to mobile platforms such as thermal throttling and battery consumption. There was also a shared effort to make The Climb fit into the very tight memory constraints of the Oculus Quest. Even though our Systems and Rendering teams did a wonderful job making our systems more compact and eliminating overhead, we also needed to re-think our approach to building levels in The Climb to make them fit, as our initial approach was more focused on reducing loading times. The other aspect that I would like to mention is much less visible to the outside world, but still an important one. Our whole pipeline had to factor in that the new platform meant our programmers and content creators had to be as efficient as possible. That included features from project generation to build packaging, including, of course, deployment to the device.

Theo: Porting a PC game to a portable, battery-powered device like the Oculus Quest is a challenge. Not only are key characteristics like memory bandwidth and numerical computation speed significantly different to PC, but also new constraints come into play. For instance, the battery drain needs constant supervision, and without external cooling, the devices can heat up quickly if pushed too hard. The GPU architecture is also significantly different from mainstream PC GPUs, requiring customized solutions to get the best performance. It was a learning process for us, but in the end, it helped us to optimize both CRYENGINE and the CRYENGINE renderer to a whole new level.

Hussein: Many people didn’t notice that in the original version of The Climb, all the levels were part of one big level. We chose this approach because, at the very beginning of the project, we had technical difficulties in rendering loading screens. Later during production, we were able to get loading screens in, but by then, it was too late to separate the levels. We used the loading screens to make the level transitions smoother and hide render artifacts. In one of our early tests, we noticed that the original big map wouldn’t run on the Oculus Quest. So we decided to separate one of the levels out and see if the Quest could run it. After the test was successful, we knew that we had to split all 15 levels into single levels. As we used one big level before, all the level and code logic that was designed and optimized needed to be rewritten to handle cases where information has to carry from one level to another.


How pleased are you with the performance of CRYENGINE on this chipset?

Sebastien: The Climb definitely proved that even though CRYENGINE has always been renowned for pushing the boundaries on high end machines, it can scale down to more modest hardware and shine. Now that we’ve paved the way, we’re excited to bring all of these learnings and improvements to our community and licensees in the future and see what they come up with. This work will definitely open up a lot of new doors and opportunities for people working with CRYENGINE in the future.

Theo: We have learnt a lot about this specific chipset during the development of The Climb Quest port, and this directly reflects in the performance of CRYENGINE on the Quest device. There are, of course, always further improvements to be made, but I believe we have nailed the most important ones while porting this game. We are quite pleased with the outcome. Mobile, in general, is a very hot topic for CRYENGINE at the moment, and while I can’t give any details right now, I can tell you that we have some very cool things to look forward to.

How did CRYENGINE help you achieve your vision?

Theo: We decided from the start to build the rendering pipeline based on the Vulkan Graphics API. Vulkan gives developers much finer control over the device than OpenGL, enabling us to squeeze every last bit of performance from the hardware. Another advantage of CRYENGINE is the strong implementation of physically based shading. It lets artists recreate almost any surface from the real world, and the faithful light-material interactions greatly help the immersion of the players in the game.

Hussein: The new version of the engine has lots of new features and optimizations, which made it easier to reach our goal. One of the cool new features is better Python support. It was very easy for us level designers to create small scripts which helped us to automate lots of processes. As mentioned, we now had 15 separate levels, and some changes had to be merged to each, sometimes with small alterations. This was solved very quickly with scripts.

Sebastien: CRYENGINE is, at its core, a cross-platform engine, and as such most systems are extendable. This was a huge help when developing a game for a new device and platform.

What do you think was the biggest achievement for the team?

Sebastien: This project involved a lot of new challenges for us. We had a new hardware platform, new operating system, new CPU architecture, new GPU architecture, new input devices, new tools, well, you get the picture. We also pushed the Vulkan mobile pipeline both for our engine and for the platform, which led to some improvements on their tools and drivers. All in all, I think that the biggest achievement was managing to release such a high quality product when facing so many unknowns.

Theo: When we started working on the project, we were not sure how much of the visual fidelity we would be able to preserve on the Oculus Quest device. The screen resolution and the high frame rate were difficult targets to hit, and it required a lot of brainstorming and out-of-the-box thinking to come up with solutions. But in the end, we managed to bring the core visual experience and the core gameplay experience to the device.

Hussein: At Crytek, we’re known for shipping games that look amazing. We could have made our lives way easier by just reducing the quality of the game and porting it over. But our biggest achievement was to ensure the game still looks amazing and plays amazing, all while reaching the target fps.

Cheers, guys!

As ever, we look forward to your feedback in the comments, on the forum, or via Facebook and Twitter. You can ask questions, pick up tips and tricks, and more by joining our community and the CRYENGINE development team over on our official CRYENGINE Discord channel. If you find a bug in the engine, please report it directly on GitHub, which helps us to process the issue quickly and efficiently.

Are you looking for your next career move? At Crytek, we value diversity, and we actively encourage people from all kinds of backgrounds and experience levels to apply to our open positions, so join us over at LinkedIn and check out our careers page.

Read the original article

Video Gamer News

News: Counter-Strike breaks its concurrent player record, then breaks it again the very next day

 


Valve’s Counter-Strike: Global Offensive recorded a whopping 854,801 concurrent players, then that record was bested the very next day with 901,681 concurrent players on February 10, 2020 (via SteamDB).

Released in 2012, the multiplayer first-person shooter still enjoys serious success, though it has been rocked by controversies like skin betting and match fixing. To secure its competitive scene, esport organisations like Cloud9 and Dignitas have set up a team-owned league, said to mimic professional wrestling and mixed martial arts shows. The new league, named Flashpoint, boasts one of the greatest prize pools for the game, with $2 million up for grabs, and it’s looking to encourage new talent and offer these players personalised financial support. This new record for Counter-Strike: Global Offensive places it well above DOTA 2, PUBG, GTA V, and Tom Clancy’s Rainbow Six Siege.

Interestingly, Valve implemented a new update for Counter-Strike: Global Offensive that mutes players who rack up abuse reports and continue to ignore warnings from the developer. The player must earn experience points until they are unmuted by the game, and even then, the players in the match with them must choose to manually unmute them. This feature was sought after by the community, and it’s possible that its numbers will climb even higher if people know they won’t be getting an earful from an especially… enthusiastic player.

Counter-Strike: Global Offensive is out now for PC. 
 

Read the original article

Unreal Engine News

A comprehensive guide to creating 360-degree game trailers using Unreal

 


As with most creative rabbit holes, we fell into the concept of 360 trailers by asking questions. What is the best way to represent a VR experience from non-VR mediums? How can we extend our immersive expertise to our game promotions? Trailers are the most powerful promotional tool for games; can we push a trailer to be just as immersive and engaging as the VR experience itself?

We at Archiact were planning the announcement of our recent VR adventure experience, FREEDIVER: Triton Down, which is available now on Steam and the Oculus Store, and were nosing around for the best way to spread the word about this game we’d made and loved. Since going live, our 360-degree teaser trailer has not only kicked off the most successful announcement week in our studio’s history, but also shattered every video record our previous trailers had ever set.

Best of all, no fancy third-party tech or expensive program licences were needed: we were able to accomplish all of this using Unreal Engine with only a small team of three developers spending a few hours for setup, plus another day or so for render time. The workflow was amazingly smooth, and we’d love to share it with you.

If you’re wondering if 360 video might be right for your VR game promotions, this post will walk you through the technical steps within Unreal to film, record, and render your in-game content in 360 degrees. We’ll also share some of the learnings we gained along the way regarding the less-tangible side of 360 content creation, such as non-linear storytelling, viewer engagement, and more. 
 

Okay, But Why 360?

For VR developers, the problem is a familiar one: how can you accurately convey just how amazingly immersive and interactive your VR game is, when the vast majority of your marketing materials will be experienced on a 2D screen? Most of us stick to what we know, and rely on creative ways to portray a three-dimensional experience through a 2D medium, largely in the form of trailers, screenshots, and GIFs.

We knew right away that FREEDIVER needed more than that. Between the intense underwater environments and use of one-to-one gestural swimming locomotion, it was screaming for a promotional asset that matched its immersive chops: 360 video seemed like a good place to start.

There was just one catch for us: we had never made 360 content before. 

Who Else is Using 360 Content?

What’s the first thing you do when you’re about to do something for the first time? See what everyone else has already done!

Somewhat surprisingly, there are only a handful of VR games that have chosen the 360 format for their promotional videos. The first and most prominent example is the trailer for The Climb.

The Climb had an advantage here: because its gameplay is straightforward and relatable enough (it’s literally summarized by the title), the developers didn’t need to spend too much time establishing features or mechanics. What we noticed right away in their trailer is the sense of presence: a big part of The Climb’s appeal is the incredible vistas you’re rewarded with during and at the end of each climb, and the way this trailer is shot encourages the viewer to really look around and take in the beautiful scenery. You want to be there, right now; the act of climbing is almost secondary, and that’s okay.

Arizona Sunshine’s 360 trailer takes advantage of both space and time. While you’re whisked from scene to scene in this apocalyptic tableau, time is slowed down enough to give the impression that this is all happening at once. It gives a real sense of chaos to the view, and sets up expectations for a game that will drop you right in the middle of that intense whirlwind. Interestingly, the storytelling here is quite linear: where The Climb’s trailer rewards the viewer no matter where they choose to look, Arizona Sunshine’s trailer still focuses the action right in front of the player, and there isn’t much else to see beyond the immediate action you’re served.

Last, but not least, is the 360 trailer for Psychonauts: The Rhombus of Ruin. At a runtime of 93 seconds, this trailer appears to be pre-rendered, and is almost entirely story-based. The use of binaural audio here is key: as the viewer turns their head to explore the spaceship scene, the direction of the character’s voice always tugs them back to center. One challenge this trailer does highlight is how to tell a clear linear story in a non-linear format; without the use of voice over, there are few context clues to bring the viewer into the story and, therefore, to give them a presence in the world the developers have created. It’s a tough challenge!

What We Learned

 

  • Reward the player for looking around
  • Presence is key, gameplay less so
  • Find a way to keep your viewer’s attention centered on the most important thing, without punishing them for straying
  • In other words, treat it a lot like VR! Many of the same principles apply here.

What Does 360 Content Need to Succeed?

With some new knowledge under our belts, we set our goal for our own 360 trailer. In the end, we decided the focus needed to be on tone and energy: we wanted viewers to feel what it was like to be trapped inside the hull of a sinking ship, to get a taste of the nerves and the shortness of breath that the game so wonderfully douses you in.

Right away, we sketched out the following guiding elements:
 

  • Establish the scene quickly: you are on a boat!
  • Establish presence: you are a person! Look at those arms. Those are yours!
  • Establish elements of anxiety: this is not a friendly place, and the swimmer is definitely in trouble
  • Establish our highest level gameplay mechanics: swimming and oxygen management
  • Always give the viewer something interesting to look at
  • Put the viewer in danger, then give them hope, and go out with a bang

From that, our first storyboard was born: the viewer opens their “eyes” in the hull of a completely flooded ship galley. They must swim to the nearby intake hatch—past their dead shipmate’s floating body—and get enough oxygen to swim down into the vent and—hopefully—to safety.


The Hidden Challenges of 360

Armed with our storyboard and a rough script, we dove into production. Right away, we faced challenges. Some were story based: How do we direct the viewer’s eye? How long should we keep the camera in a specific location before moving on? All the others were technical. How can we even record this in the way we need? What will our in-game avatar look like when you can swivel the head around in any direction? 

With Unreal, the road to answers for these questions was much smoother than we anticipated, and even allowed for iteration to get to that perfect end result. Here are the technical steps, one by one, for you to follow along with and/or troubleshoot your existing process.

Step 1: Get Equipped

The first step in production was to get our ducks in a row. And by ducks, we mean plugins. The one we used is the [experimental] Stereoscopic Panoramic Capture plugin from Unreal. Our trailer was created using a pre-4.23 version of the plugin, so be sure to check out the notes on the new version for the most up-to-date workflow!


With that installed, make sure that Instanced Stereoscopic Rendering is OFF in the Project Settings.


Restart the Editor for the changes to take effect. Add the plugin’s execute console command nodes right after the Begin Play Event node. Now you’re ready to load up your scene!


Step 2: VR Mo-Cap? Easier Than It Sounds!

Since presence is key, we absolutely needed to have the arms of the main character (Ren Tanaka) in every shot. That meant essentially performing motion capture inside the game itself. Sequencer was the best tool for this job, and we used it to record gameplay.

In the FREEDIVER project, our base character is spawned only when the user plays the game. In order for Sequence Recorder to tangibly handle the base character, we needed to set the GameMode Override to “GameMode” in the World Settings.


Next, we added (dragged in) the BaseCharacter manually into the level.


We selected the BaseCharacter in the level and set its Auto Possess Player property to ‘Player 0’.


Once we had those set, the BaseCharacter took inputs from the controller and could then be used to record Sequencer animation. From there, we opened up the Sequence Recorder window…


…and selected the BaseCharacter and pressed the Add button at the top of the Sequence Recorder window.


Now it’s time to dive into the motion capture! Launch the Game in VR mode…


…then press Shift+F1 to focus out of the VR window and get back to the Editor while the VR mode is still playing. Press the Record button at the top of the Sequence Recorder window.


Click back on the VR window to focus on it. You should be able to resume control over the character and camera. You should also see a countdown overlay (4, 3, 2, 1) indicating that the recording is about to start.

Lights, camera, action! Once the recording begins, move about the virtual world and perform your actions as you planned. Remember, every action, from head movement to controller inputs, will be recorded as Sequencer animations, so don’t forget to act the part from head to toe.


Tips for VR Mo-Cap:

 

  • Don’t be afraid of multiple takes. Just like capturing live action, it will take a few run-throughs to get everything right.
  • Keep your head as steady as you can. Avoid swinging around wildly.
  • The viewer needs two to three seconds to fully focus on a new object or action.
  • If you don’t have a player avatar with visible hands, definitely consider it! We were amazed by how much character and storytelling was possible through Ren Tanaka’s gestures. 
  • Exaggerate: tiny motions may not register in the final animation sequence, so keep your arms up high and wide, and your movements slow.

Once the performance is wrapped, press Shift+F1 again to defocus out of the VR window in order to get back to the Editor and stop the recording in the Sequence Recorder window. A recorded sequence will then be created.


Open up that sequence to see the animation track contents.


You can inspect the character body animation by right-clicking on the SM_VRPlayer animation track properties and double-clicking on the recorded animation asset.


Step 3: Fine-tuning Your Animation

While our method of motion capture was effective in recording the essence of the player’s movement through the scene—timing, head movement, and general placement/interaction of hands—no IK system is perfect, and there will likely always be room for improvement in the resulting animation.

Because this teaser trailer would be the VR community’s first ever glimpse of what FREEDIVER has to offer, we wanted the animation to be perfect. It’s more than just a quality issue; it’s also a question of storytelling. First-person footage from a VR game is indistinguishable from first-person 2D game footage, unless you have significant player presence in the form of hands/interaction fidelity.

In order to tell the story of Ren’s underwater struggle for survival in the 30 seconds or so we had to tell it, ensuring that her hands were believable “actors” in their movement and interactions was key.


Thanks to the Sequencer, you can export the FBX files from your gameplay motion inputs directly to the animation software of your choice. In our case, that was Maya, which our animator used to fine-tune the animation keyframes written in Sequencer.

Remember: When you export to the animation software, you are only going to see the player avatar, and not the game world/objects around it. Since this makes orientation and fine interactions difficult, we recommend recording an additional 2D render from the viewpoint of your player avatar in Unreal, and attaching it to the head bones of your animating character.


Step 4: Final Tweaks & Export

Reimport your polished animations into Sequencer. Now, you have the chance to make any final tweaks to your in-game world. You can add lighting to guide your viewer’s eye, nudge in-game objects to better fit the frame, and even hand-animate moving elements for exact timing. This was by far one of the most useful steps in the process, and Unreal gives you the flexibility to make changes as you need.

Remember: If you want to have non-in-game visual elements appear in your 360 video (such as a logo in the corner, legal text, etc.) now is the best time to add them. This way, they won’t be “stuck” to the viewer’s gaze like a sticker on their virtual eyeball, which is a distraction and instant immersion-killer. If a logo “floats” along with the viewer, but remains in place while they look around, it’s much less intrusive.

When you’re ready to capture, all that’s left is to play in StandAlone Game mode, and render out the sequence as it unfolds exactly how you wanted it.


You can select the resolution your render will output as; for ultra-crisp video in 360, we’d recommend 8K, or 4K minimum. 360 images for both the left and right eye view will be rendered out and saved individually.

Remember: Because the render outputs as 360 stills in sequence, there will be no attached audio. In order to capture your in-game soundscapes organically, use a screen recorder such as OBS or ShadowPlay to record the Sequencer events independently, then import to your editor later.


Next, fire up your linear editing software of choice and import the images as a sequence. From here, you can color correct as needed, and render out the final master file in the desired video format.

VLC player can play this new 8K 360 panoramic movie format, with complete click-and-drag gaze control.

Step 5: Editing Your 360 Footage

This is it: you’re finally ready to take your master 360 files and edit them into your trailer, or whatever video asset you’re creating. 360 files will behave just like 2D files in most editing software, so simply arrange the sequences as needed, color correct them, add transitions and title cards, and render out your final. If you recorded the game audio separately, this is the time to add that back in.

Quick Tips for Editing in 360:
 

  • Because your footage will be at least 4K, you will likely need a beefy PC to handle the render.
  • You can add 2D elements such as text and still image files, but they must be projected in 360/VR-mode in order to avoid severe distortion when rendered in 360. (Many editing suites have this function built in; otherwise, you should be able to find a plugin to handle your projections.)
  • Not all graphics cards are equipped to render video effects in 360. Ensure you have a supported graphics card and update your drivers.
  • As mentioned above, avoid overlaying text or graphics in layers directly over the 360 video, as they will remain static and become a severe distraction in the “corner” of your viewer’s virtual eye. If you want to have a logo permanently on-screen, for instance, add it to the game world and attach it to the player avatar instead.


Step 6: Audio

We knew that audio was going to be paramount to the success of this trailer, and that meant we left it to the audio experts!

The audio for FREEDIVER was created by Interleave, and was designed to be as realistic and immersive as possible. Instrumentation is meant to function organically with the ship’s sounds, since the ship is a primary character in FREEDIVER, which sings and speaks through its sinking. Instead of approaching the score with traditional brass or strings, the audio designer and composer settled on a music direction where the ship became the actual source of tone and tension throughout the score. They even rubbed different sustained frequencies against each other based on the user’s input as a way of enhancing the tension the user would feel underwater.

Melodies were conjured by manipulating sounds like dry ice placed in large ventilation shafts, and string instrument bows were used on different densities of metal in software samplers. When designing the ship’s large impacts and huge metallic whines, Interleave tuned them to work together with the goal of blending them with the music, making it difficult to distinguish one from the other. In-game, they also played with this idea of blending music and sound design further: for instance, the pause menu’s swelling metal sounds play back in 3D, with randomized choices of samples and volume and position moving around the listener’s head.

The teaser provided the additional challenge of hitting many emotional beats in a short time span. At the beginning, the viewer hears the uncertain whines of the ship which crescendo into large haunting blasts, as Ren Tanaka struggles to get to the air pocket in the hatch. As she submerges, the music shifts and builds into a more triumphant and courageous section. The percussion and bass kick off with a rising heartbeat, settling down to begin a shift to an increasingly panicked heartbeat aligned with Ren’s fight to survive. 


The teaser sound effects were edited to picture primarily from gameplay assets and then tweaked and sweetened. Interleave wanted a strong contrast between the air and water-filled environments, so they used a tonal contrast as well as a perceptual one. Plugins were used to add credible space to the air environments and to keep the underwater environment intimate. They also smeared the position of underwater sounds to accent sound wave speed differences in the two mediums, and many of the sounds were positioned using 360° surround tools.

In the end, Interleave supplied us with impeccable 5.1 surround sound audio, as well as 2-channel files in case we were uploading the final video somewhere without 5.1 support. The trailer’s final form is a haunting, powerful representation of the FREEDIVER experience, and the audio plays a huge role in telling the story of Ren Tanaka’s plight.


Step 7: Rendering & Upload

Once you’ve obtained the audio and striped it under your visuals, it’s time to kick off your final renders. Here are the exact settings we used for our two master renders: one for 5.1 surround, and one for 2-channel.

Video Settings

Format: H.264 with MP4 wrapper
Width: 4096
Height: 2048
Frame Rate: 60
Field Order: Progressive
Aspect: Square Pixels
TV Standard: NTSC
Performance: Software Encoding
Profile: Main
Level: 5.2
Bitrate Encoding: VBR, 2 Pass
Target Bitrate [Mbps]: 30
Maximum Bitrate [Mbps]: 35
Video is VR: YES
Frame Layout: Monoscopic
Horizontal Field of View: 360
Vertical Field of View: 180
 

Audio Settings

Audio Format: AAC
Audio Codec: AAC
Sample Rate: 48,000 Hz
Channels: 5.1 (or 2-Ch)
Quality: High
Bitrate [Kbps]: 320

Remember: Many video hosts, including Steam, do not support 5.1 sound, and will crunch it down to a messy 2-channel format. To avoid unexpected results, take a page from our book and have a custom 2-channel audio file rendered out for this exact purpose.

Now that you have your final renders (hooray!), it’s time to upload to the video host of your choosing. YouTube, Vimeo, and Facebook all support fully interactive 360 content, but uploads can take a very long time to process, so be sure to give yourself plenty of time before the big reveal.

Once your file has processed, take a moment to quickly double check that the video can play in 360 and is interacting as you expect, then share that awesome creation with the world!

Thanks for reading! If you create your own 360 content using this guide, go ahead and share it with us via Twitter (tag us @ArchiactVR and @UnrealEngine) so we can see all your hard work.

https://www.unrealengine.com/tech-blog/a-comprehensive-guide-to-creating-360-degree-game-trailers-using-unreal