Some of the tools, extensions, blogs and GitHub repositories that I rely on.
The solution was found by another user on the Slack channel and it worked for me, so I wanted to pass it along in case someone else is having the same issue.
In this tutorial, we will be creating an object collection. We will not be writing any custom scripts (we will only use built-in MRTK scripts). We will create a new material, a new prefab for the objects arranged by the object collection script, and an object hierarchy to contain the object collection.
The Grid Object Collection and Scatter Object Collection scripts allow us to easily lay out many child game objects at once.
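As a preview of where we are headed, here is a minimal, hypothetical sketch of driving a Grid Object Collection from code (the CollectionPopulator class, itemPrefab field, and itemCount value are illustrative, and the GridObjectCollection namespace may differ between MRTK releases); the tutorial itself configures everything in the inspector:

using Microsoft.MixedReality.Toolkit.Utilities; // namespace may vary by MRTK release
using UnityEngine;

public class CollectionPopulator : MonoBehaviour
{
    [SerializeField] private GridObjectCollection collection; // the MRTK script on the parent object
    [SerializeField] private GameObject itemPrefab;           // illustrative prefab to arrange
    [SerializeField] private int itemCount = 9;

    private void Start()
    {
        // Instantiate the children under the collection's transform...
        for (int i = 0; i < itemCount; i++)
        {
            Instantiate(itemPrefab, collection.transform);
        }

        // ...then ask the collection to recompute the layout.
        collection.UpdateCollection();
    }
}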
Save the project.
Run the project in the Unity Editor.
Save your scene.
Before we begin with this tutorial, follow all of the prerequisite steps outlined in this blog post: MRTK RC1 v2.0.0 Tutorial – Common Steps – Table of Contents
Even though this tutorial is similar to the first tutorial published on this blog, I suggest you start from scratch and follow the prerequisites faithfully. If you do this now, future tutorials will be easier to add onto the base project we are creating.
We will be creating a new scene, and within that scene we will create a cube that the user can interact with and, when the user focuses on the cube, it highlights the cube. We will then make the cube a prefab so we can reuse the interactable item in future tutorials.
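To give a feel for how a focus-driven highlight can work, here is a minimal sketch against the MRTK focus events (the FocusHighlighter class name and highlightColor field are illustrative; the tutorial itself may implement the highlight differently):

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class FocusHighlighter : MonoBehaviour, IMixedRealityFocusHandler
{
    [SerializeField] private Color highlightColor = Color.yellow; // illustrative choice

    private Renderer cachedRenderer;
    private Color originalColor;

    private void Awake()
    {
        cachedRenderer = GetComponent<Renderer>();
        originalColor = cachedRenderer.material.color;
    }

    // Called by the MRTK input system when a pointer starts focusing this object.
    public void OnFocusEnter(FocusEventData eventData)
    {
        cachedRenderer.material.color = highlightColor;
    }

    // Called when focus leaves; restore the original appearance.
    public void OnFocusExit(FocusEventData eventData)
    {
        cachedRenderer.material.color = originalColor;
    }
}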
If you have previously followed the prerequisite steps above, you can simply create the new scene by opening Base_Demo_Scene and using “Save As” to give it the name Tutorial_02. Ensure you save this scene in the Assets/App/Content/Scenes folder.
At this point, if you have successfully followed this tutorial, clicking Play in the Unity Editor should show you something similar to the YouTube video here: MRTK vNext RC1 Tutorial 02 – Video 01
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Physics;
using Microsoft.MixedReality.Toolkit.Utilities;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

/// <summary>
/// Component that allows dragging a <see cref="GameObject"/>.
/// Dragging is done by calculating the angular delta and z-delta between the current and previous hand positions,
/// and then repositioning the object based on that.
/// </summary>
public class DragAndDropHandler : BaseFocusHandler, IMixedRealityInputHandler<MixedRealityPose>, IMixedRealityPointerHandler, IMixedRealitySourceStateHandler
{
    private enum RotationModeEnum
    {
        Default,
        LockObjectRotation,
        OrientTowardUser,
        OrientTowardUserAndKeepUpright
    }

    [SerializeField]
    [Tooltip("The action that will start/stop the dragging.")]
    private MixedRealityInputAction dragAction = MixedRealityInputAction.None;

    [SerializeField]
    [Tooltip("The action that will provide the drag position.")]
    private MixedRealityInputAction dragPositionAction = MixedRealityInputAction.None;

    [SerializeField]
    [Tooltip("Transform that will be dragged. Defaults to the object of the component.")]
    private Transform hostTransform;

    [SerializeField]
    [Tooltip("Scale by which hand movement in Z is multiplied to move the dragged object.")]
    private float distanceScale = 2f;

    [SerializeField]
    [Tooltip("How should the GameObject be rotated while being dragged?")]
    private RotationModeEnum rotationMode = RotationModeEnum.Default;

    [SerializeField]
    [Range(0.01f, 1.0f)]
    [Tooltip("Controls the speed at which the object will interpolate toward the desired position")]
    private float positionLerpSpeed = 0.2f;

    [SerializeField]
    [Range(0.01f, 1.0f)]
    [Tooltip("Controls the speed at which the object will interpolate toward the desired rotation")]
    private float rotationLerpSpeed = 0.2f;

    /// <summary>
    /// Gets the pivot position for the hand, which is approximated to the base of the neck.
    /// </summary>
    /// <returns>Pivot position for the hand.</returns>
    private Vector3 HandPivotPosition => CameraCache.Main.transform.position + new Vector3(0, -0.2f, 0) - CameraCache.Main.transform.forward * 0.2f; // a bit lower and behind

    private bool isDragging;
    private bool isDraggingEnabled = true;
    private bool isDraggingWithSourcePose;

    // Used for moving with a pointer ray
    private float stickLength;
    private Vector3 previousPointerPositionHeadSpace;

    // Used for moving with a source position
    private float handRefDistance = -1;
    private float objectReferenceDistance;
    private Vector3 objectReferenceDirection;
    private Quaternion gazeAngularOffset;

    private Vector3 objectReferenceUp;
    private Vector3 objectReferenceForward;
    private Vector3 objectReferenceGrabPoint;

    private Vector3 draggingPosition;
    private Quaternion draggingRotation;

    private Rigidbody hostRigidbody;
    private bool hostRigidbodyWasKinematic;

    private IMixedRealityPointer currentPointer;
    private IMixedRealityInputSource currentInputSource;

    // If the dot product between hand movement and head forward is less than this amount,
    // don't exponentially increase the length of the stick
    private readonly float zPushTolerance = 0.1f;

    #region MonoBehaviour Implementation

    private void Start()
    {
        if (hostTransform == null)
        {
            hostTransform = transform;
        }

        hostRigidbody = hostTransform.GetComponent<Rigidbody>();
    }

    private void OnDestroy()
    {
        if (isDragging)
        {
            StopDragging();
        }
    }

    #endregion MonoBehaviour Implementation

    #region IMixedRealityPointerHandler Implementation

    void IMixedRealityPointerHandler.OnPointerUp(MixedRealityPointerEventData eventData)
    {
        if (!isDraggingEnabled || !isDragging || eventData.MixedRealityInputAction != dragAction || eventData.SourceId != currentInputSource?.SourceId)
        {
            // If we're not handling drag input or we're not releasing the right action, don't try to end a drag operation.
            return;
        }

        eventData.Use(); // Mark the event as used, so it doesn't fall through to other handlers.
        StopDragging();
    }

    void IMixedRealityPointerHandler.OnPointerDown(MixedRealityPointerEventData eventData)
    {
        if (!isDraggingEnabled || isDragging || eventData.MixedRealityInputAction != dragAction)
        {
            // If we're already handling drag input or we're not grabbing, don't start a new drag operation.
            return;
        }

        eventData.Use(); // Mark the event as used, so it doesn't fall through to other handlers.

        currentInputSource = eventData.InputSource;
        currentPointer = eventData.Pointer;

        FocusDetails focusDetails;
        Vector3 initialDraggingPosition = MixedRealityToolkit.InputSystem.FocusProvider.TryGetFocusDetails(currentPointer, out focusDetails)
            ? focusDetails.Point
            : hostTransform.position;

        isDraggingWithSourcePose = currentPointer == MixedRealityToolkit.InputSystem.GazeProvider.GazePointer;

        StartDragging(initialDraggingPosition);
    }

    void IMixedRealityPointerHandler.OnPointerClicked(MixedRealityPointerEventData eventData) { }

    #endregion IMixedRealityPointerHandler Implementation

    #region IMixedRealitySourceStateHandler Implementation

    void IMixedRealitySourceStateHandler.OnSourceDetected(SourceStateEventData eventData) { }

    void IMixedRealitySourceStateHandler.OnSourceLost(SourceStateEventData eventData)
    {
        if (eventData.SourceId == currentInputSource?.SourceId)
        {
            StopDragging();
        }
    }

    #endregion IMixedRealitySourceStateHandler Implementation

    #region BaseFocusHandler Overrides

    /// <inheritdoc />
    public override void OnFocusExit(FocusEventData eventData)
    {
        if (isDragging)
        {
            StopDragging();
        }
    }

    #endregion BaseFocusHandler Overrides

    /// <summary>
    /// Enables or disables dragging.
    /// </summary>
    /// <param name="isEnabled">Indicates whether dragging should be enabled or disabled.</param>
    public void SetDragging(bool isEnabled)
    {
        if (isDraggingEnabled == isEnabled)
        {
            return;
        }

        isDraggingEnabled = isEnabled;

        if (isDragging)
        {
            StopDragging();
        }
    }

    /// <summary>
    /// Starts dragging the object.
    /// </summary>
    private void StartDragging(Vector3 initialDraggingPosition)
    {
        if (!isDraggingEnabled || isDragging)
        {
            return;
        }

        Transform cameraTransform = CameraCache.Main.transform;

        currentPointer.IsFocusLocked = true;
        isDragging = true;

        if (hostRigidbody != null)
        {
            hostRigidbodyWasKinematic = hostRigidbody.isKinematic;
            hostRigidbody.isKinematic = true;
        }

        if (isDraggingWithSourcePose)
        {
            Vector3 pivotPosition = HandPivotPosition;
            objectReferenceDistance = Vector3.Magnitude(initialDraggingPosition - pivotPosition);
            objectReferenceDirection = cameraTransform.InverseTransformDirection(Vector3.Normalize(initialDraggingPosition - pivotPosition));
        }
        else
        {
            Vector3 inputPosition = currentPointer.Position;
            previousPointerPositionHeadSpace = cameraTransform.InverseTransformPoint(inputPosition);
            stickLength = Vector3.Distance(initialDraggingPosition, inputPosition);
        }

        // Store where the object was grabbed from
        objectReferenceGrabPoint = cameraTransform.InverseTransformDirection(hostTransform.position - initialDraggingPosition);

        // in camera space
        objectReferenceForward = cameraTransform.InverseTransformDirection(hostTransform.forward);
        objectReferenceUp = cameraTransform.InverseTransformDirection(hostTransform.up);

        draggingPosition = initialDraggingPosition;
    }

    /// <summary>
    /// Stops dragging the object.
    /// </summary>
    private void StopDragging()
    {
        if (!isDragging)
        {
            return;
        }

        currentPointer.IsFocusLocked = false;
        isDragging = false;
        handRefDistance = -1;

        if (hostRigidbody != null)
        {
            hostRigidbody.isKinematic = hostRigidbodyWasKinematic;
        }
    }

    #region IMixedRealityInputHandler<MixedRealityPose> Implementation

    void IMixedRealityInputHandler<MixedRealityPose>.OnInputChanged(InputEventData<MixedRealityPose> eventData)
    {
        if (eventData.MixedRealityInputAction != dragPositionAction || !isDraggingEnabled || !isDragging || eventData.SourceId != currentInputSource?.SourceId)
        {
            return;
        }

        Transform cameraTransform = CameraCache.Main.transform;
        Vector3 pivotPosition = Vector3.zero;

        if (isDraggingWithSourcePose)
        {
            Vector3 inputPosition = eventData.InputData.Position;
            pivotPosition = HandPivotPosition;
            Vector3 newHandDirection = Vector3.Normalize(inputPosition - pivotPosition);

            if (handRefDistance < 0)
            {
                handRefDistance = Vector3.Magnitude(inputPosition - pivotPosition);
                Vector3 handDirection = cameraTransform.InverseTransformDirection(Vector3.Normalize(inputPosition - pivotPosition));

                // Store the initial offset between the hand and the object, so that we can consider it when dragging
                gazeAngularOffset = Quaternion.FromToRotation(handDirection, objectReferenceDirection);
            }

            // in camera space
            newHandDirection = cameraTransform.InverseTransformDirection(newHandDirection);
            Vector3 targetDirection = Vector3.Normalize(gazeAngularOffset * newHandDirection);
            // back to world space
            targetDirection = cameraTransform.TransformDirection(targetDirection);

            float currentHandDistance = Vector3.Magnitude(inputPosition - pivotPosition);
            float distanceRatio = currentHandDistance / handRefDistance;
            float distanceOffset = distanceRatio > 0 ? (distanceRatio - 1f) * distanceScale : 0;
            float targetDistance = objectReferenceDistance + distanceOffset;

            draggingPosition = pivotPosition + (targetDirection * targetDistance);
        }
        else
        {
            pivotPosition = cameraTransform.position;

            Vector3 pointerPosition = currentPointer.Position;
            Ray pointingRay = currentPointer.Rays[0];

            Vector3 currentPosition = pointerPosition;
            Vector3 currentPositionHeadSpace = cameraTransform.InverseTransformPoint(currentPosition);
            Vector3 positionDeltaHeadSpace = currentPositionHeadSpace - previousPointerPositionHeadSpace;

            float pushDistance = Vector3.Dot(positionDeltaHeadSpace, cameraTransform.InverseTransformDirection(pointingRay.direction.normalized));

            if (Mathf.Abs(Vector3.Dot(positionDeltaHeadSpace.normalized, Vector3.forward)) > zPushTolerance)
            {
                stickLength = DistanceRamp(stickLength, pushDistance);
            }

            draggingPosition = pointingRay.GetPoint(stickLength);
            previousPointerPositionHeadSpace = currentPositionHeadSpace;
        }

        switch (rotationMode)
        {
            case RotationModeEnum.OrientTowardUser:
            case RotationModeEnum.OrientTowardUserAndKeepUpright:
                draggingRotation = Quaternion.LookRotation(hostTransform.position - pivotPosition);
                break;
            case RotationModeEnum.LockObjectRotation:
                draggingRotation = hostTransform.rotation;
                break;
            default:
                // in world space
                Vector3 objForward = cameraTransform.TransformDirection(objectReferenceForward);
                // in world space
                Vector3 objUp = cameraTransform.TransformDirection(objectReferenceUp);
                draggingRotation = Quaternion.LookRotation(objForward, objUp);
                break;
        }

        Vector3 newPosition = Vector3.Lerp(hostTransform.position, draggingPosition + cameraTransform.TransformDirection(objectReferenceGrabPoint), positionLerpSpeed);

        // Apply Final Position
        if (hostRigidbody == null)
        {
            hostTransform.position = newPosition;
        }
        else
        {
            hostRigidbody.MovePosition(newPosition);
        }

        // Apply Final Rotation
        Quaternion newRotation = Quaternion.Lerp(hostTransform.rotation, draggingRotation, rotationLerpSpeed);
        if (hostRigidbody == null)
        {
            hostTransform.rotation = newRotation;
        }
        else
        {
            hostRigidbody.MoveRotation(newRotation);
        }

        if (rotationMode == RotationModeEnum.OrientTowardUserAndKeepUpright)
        {
            Quaternion upRotation = Quaternion.FromToRotation(hostTransform.up, Vector3.up);
            hostTransform.rotation = upRotation * hostTransform.rotation;
        }
    }

    #endregion IMixedRealityInputHandler<MixedRealityPose> Implementation

    #region Private Helpers

    /// <summary>
    /// Computes an exponentially ramped distance along the pointing ray.
    /// </summary>
    /// <remarks>
    /// An exponential distance ramping where distance is determined by:
    /// f(t) = (e^At - 1)/B
    /// where:
    /// A is a scaling factor: how fast the function ramps to infinity
    /// B is a second scaling factor: a denominator that shallows out the ramp near the origin
    /// t is a linear input
    /// f(t) is the distance exponentially ramped along variable t
    ///
    /// Here's a quick derivation for the expression below.
    /// A = constant
    /// B = constant
    /// d = ramp(t) = (e^At - 1)/B
    /// t = ramp_inverse(d) = ln(B*d+1)/A
    /// In general, if y=f(x), then f(currentY, deltaX) = f( f_inverse(currentY) + deltaX )
    /// So,
    /// ramp(currentD, deltaT) = (e^(A*(ln(B*currentD + 1)/A + deltaT)) - 1)/B
    /// simplified:
    /// ramp(currentD, deltaT) = (e^(A*deltaT) * (B*currentD + 1) - 1) / B
    /// </remarks>
    private static float DistanceRamp(float currentDistance, float deltaT, float A = 4.0f, float B = 75.0f)
    {
        return (Mathf.Exp(A * deltaT) * (B * currentDistance + 1) - 1) / B;
    }

    #endregion Private Helpers
}
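A note on using this handler: attach it to the game object you want to drag (or assign a different hostTransform in the inspector), and map dragAction and dragPositionAction to actions defined in your MRTK input actions profile. While a drag is active, the handler locks the pointer's focus (currentPointer.IsFocusLocked = true), so the object keeps following the pointer even when the cursor leaves its collider; it also makes any attached Rigidbody kinematic for the duration of the drag and restores its previous state when the drag ends.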
The goal of this tutorial is to create a scene with an environment consisting of a large cube (the floor) textured with the Sand material, and that allows the user to move within the environment. We will also add some ambient audio to the scene.
Create a scene named Base_Demo_Scene and save it to the Assets/App/Content/Scenes folder.
Select the Mixed Reality Toolkit menu and select Add to Scene and Configure.
Run the scene from the Unity Editor. If you followed all of the steps correctly, you should hear music playing, and you will appear in the center of the scene with sand under your feet. You should be able to teleport around in the scene using the WMR controllers.
After confirming the behavior above, first drag the Terrain game object from the scene into the Assets/App/Content/Prefabs folder. Then do the same thing for the Environment game object. This creates prefabs for you and they can be used in other scenes with minimal effort. NOTE: The order in which you create the prefabs is important.
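If you find yourself repeating this step, it can also be scripted; below is a hypothetical editor helper (the menu path, class name, and GameObject.Find lookups are illustrative) that saves the two objects as prefabs in the required order using Unity's PrefabUtility (Unity 2018.3+). Place it in an Editor folder:

using UnityEditor;
using UnityEngine;

public static class PrefabCreator
{
    // Hypothetical editor helper: creates the two prefabs in the order the
    // tutorial requires (Terrain first, then Environment).
    [MenuItem("Tools/Create Tutorial Prefabs")]
    private static void CreateTutorialPrefabs()
    {
        SaveAsPrefab("Terrain");
        SaveAsPrefab("Environment");
    }

    private static void SaveAsPrefab(string objectName)
    {
        GameObject sceneObject = GameObject.Find(objectName);
        if (sceneObject == null)
        {
            Debug.LogWarning($"Could not find '{objectName}' in the scene.");
            return;
        }

        // Keeps the scene object connected to the newly created prefab asset.
        PrefabUtility.SaveAsPrefabAssetAndConnect(
            sceneObject,
            $"Assets/App/Content/Prefabs/{objectName}.prefab",
            InteractionMode.UserAction);
    }
}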
Your scene in the hierarchy window should look like that shown in figure 01 below. If it does not, review this tutorial to ensure you followed all of the steps accurately.
What we just did is set up an area for the VR user to move around in. We added an audio clip to the Environment game object that begins playing as soon as the VR scene is initialized and loops continuously until we exit the application. This gives the user audio feedback that the scene is loaded, active, and running. We also added a simple, flat, 100 meter by 100 meter floor so the user can teleport around in the environment. We added the Sand material to the FloorPanel game object to give the user better visual feedback on movement. If we had just assigned a solid-color material to the floor, it would be difficult for the user to visually detect subtle movement within the environment. Additionally, we dropped the Environment game object to 1 meter below the normal level (because my system, for some reason, routinely spawns me into the floor at about chest height when I don't do this).
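For reference, the looping ambient audio can also be wired up from code instead of the inspector; a minimal sketch, assuming an AudioClip assigned to an illustrative ambientLoop field:

using UnityEngine;

public class AmbientAudio : MonoBehaviour
{
    [SerializeField] private AudioClip ambientLoop; // illustrative field name

    private void Awake()
    {
        // Mirror the inspector settings described above: loop forever and
        // start playback as soon as the scene initializes (Play() is called
        // explicitly since the source is created at runtime).
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = ambientLoop;
        source.loop = true;
        source.Play();
    }
}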
Additionally, we created an empty game object named SceneContent that will act as the root for all scene game objects other than those added by the MRTK configuration (items shown in Figure 05). We do this when using the MRTK (and I believe this applies to all VR applications) because, unlike most Unity 3D scenes, you don't move the player and/or their attached camera to move around in the scene. In a VR application context, instead of moving the player/camera, you move the scene around the player/camera. This root object simplifies moving all game objects in the scene in one go, as the sketch below illustrates. To make this even more explicit, place all interactive visuals, spawn points, etc. in the scene as children of the SceneContent root game object.
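To make the "move the scene, not the player" idea concrete, here is a minimal, hypothetical sketch (SceneMover and MovePlayer are illustrative names) that shifts everything under the SceneContent root at once:

using UnityEngine;

public class SceneMover : MonoBehaviour
{
    [SerializeField] private Transform sceneContent; // the SceneContent root object

    // Moving the root by the opposite of the desired player offset gives the
    // illusion that the player moved, while the camera rig stays where the
    // headset tracking placed it.
    public void MovePlayer(Vector3 playerOffset)
    {
        sceneContent.position -= playerOffset;
    }
}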
The goal of this tutorial is to create a clear separation between your own Unity assets and Unity assets imported from other sources.
Create the asset folder structure shown in figure 01.
The purpose of the hierarchy of folders you are creating is to keep our custom game objects and custom scripts separate from Unity assets imported via the Unity Asset Store and other sources such as GitHub. This creates a clear demarcation between our code and third-party code.
The goal of this tutorial is to import free Unity assets that can be used in the application we will be creating. If you do not know how to import assets from the Unity Asset Store, follow the tutorial “Using the Asset Store”:
https://unity3d.com/learn/tutorials/topics/asset-store/using-asset-store?playlist=17132
Import these free assets from the Unity Asset Store:
And then save your project.
At least some of the assets we just imported will be used in all future tutorials. We will use the “Sand” material from the “Lowpoly Wasteland Props” asset package and apply it to the walking surface that the user will move around on. The material has a subtle texture that gives the user better visual feedback on head tracking and movement within a sparsely populated environment. We will use an ambient sound loop (background sound) to indicate to the user that the scene is loaded, running, and ready. In later tutorials we will incorporate more assets from the “Lowpoly Wasteland Props” asset package.
We will create a Unity3D application and set up the MRTK v2.0.0 RC1 Release packages in it.
Go to GitHub to download the Unity packages (*.unitypackage) for the Mixed Reality Toolkit (MRTK) v2.0.0 RC1 Release located here: https://github.com/Microsoft/MixedRealityToolkit-Unity/releases
Create a Unity Project named MRTK-RC1-Demo-02. I have found that issues loading the MRTK assets are minimized if you change the build settings to target a Universal Windows Platform (UWP) application and, under Player Settings > Other Settings, set the API Compatibility Level to .NET Standard 2.0 before importing the MRTK assets. Figures 01 and 02 below show the settings I modified.
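If you set up projects like this often, the two settings can also be applied with a small editor script; a sketch using standard Unity editor APIs (the menu path and class name are illustrative; verify the calls against your Unity version), placed in an Editor folder:

using UnityEditor;

public static class ProjectSetup
{
    [MenuItem("Tools/Configure For MRTK")]
    private static void Configure()
    {
        // Switch the active build target to Universal Windows Platform...
        EditorUserBuildSettings.SwitchActiveBuildTarget(BuildTargetGroup.WSA, BuildTarget.WSAPlayer);

        // ...and set the API Compatibility Level to .NET Standard 2.0 for UWP.
        PlayerSettings.SetApiCompatibilityLevel(BuildTargetGroup.WSA, ApiCompatibilityLevel.NET_Standard_2_0);
    }
}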
Select menu item: Assets/Import Package/Custom Package… to import the MRTK packages you already downloaded:
After successfully importing the MRTK packages, you should have the assets shown in figure 03. At this point, save your project.
Figure 03 – Assets imported via the MRTK unity packages.
The motivation for breaking the tutorial into separate mini-tutorials is to allow me to start new tutorials from these prerequisites. If you have already completed these steps for a previous tutorial, you can skip them and just create a new scene in the previously created project.
The Unity packages (*.unitypackage) for the MRTK v2.0.0 RC1 Release, located here: https://github.com/Microsoft/MixedRealityToolkit-Unity/releases
have an issue building Universal Windows Platform (UWP) output for users not enrolled in the Windows Insider Program. The issue revolves around the operating system updates for the HoloLens 2 coming out in late May 2019. Running the application within the Unity Editor works (minus some new functionality around the HoloLens 2 hand manipulations), but compiling and deploying to Windows 10 will fail.
You can get more information here: https://stackoverflow.com/questions/55601074/since-the-new-version-of-mrtk-i-cant-build-a-scene/55602238#55602238