In my game, the player is visually represented by a model and internally represented by a kinematic body (KB). A camera sits behind the player and can orbit around the KB.
The camera orbiting functionality is built from a set of child nodes of the KB that are set as top-level (making the camera-related transforms independent of the parent KB's transforms). As the KB's transform changes (via the player moving the character), the camera pivot's global transform is updated so that its origin sits at the character's head. If the player chooses to control the camera, the KB's transform stays the same, but the local (and therefore global) transform of the camera changes to reflect the input: moving the mouse left orbits the camera left, moving it right orbits the camera right.
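As a rough sketch of that setup (node names such as kb and head_offset, and the mouse_sensitivity value, are illustrative, not the actual implementation):

func _ready():
    # The pivot is a child of the KB but set as top-level, so the KB's
    # transform doesn't drag it around.
    set_as_toplevel(true)

func _process(delta):
    # Snap the pivot's origin to the character's head each frame.
    global_transform.origin = kb.global_transform.origin + head_offset

func _unhandled_input(event):
    if event is InputEventMouseMotion:
        # Horizontal mouse motion yaws the pivot, orbiting the camera.
        rotate_y(-event.relative.x * mouse_sensitivity)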
If the player presses forward, the character moves in a direction based on the orientation of the camera. The character always moves forward, where the forward direction comes from the camera, not from the character's own forward direction.
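A minimal sketch of that camera-relative input, assuming a reference to the orbiting camera (the action names are illustrative):

func get_move_direction() -> Vector3:
    # In Godot, a camera's forward direction is its -z basis axis.
    var forward = -camera.global_transform.basis.z
    var right = camera.global_transform.basis.x
    var dir = Vector3.ZERO
    if Input.is_action_pressed("move_forward"):
        dir += forward
    if Input.is_action_pressed("move_back"):
        dir -= forward
    if Input.is_action_pressed("move_right"):
        dir += right
    if Input.is_action_pressed("move_left"):
        dir -= right
    dir.y = 0  # constrain movement to the horizontal plane
    return dir.normalized()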
If the player lands on a sloped, half-pipe-type surface, the KB and model need to orient to the normal of this surface. This creates three problems: aligning the KB to the surface normal, aligning the model to the KB, and complicating the camera's orbit around the KB.
Aligning the KB to the surface normal
To calculate the surface normal, the initial solution was to use the KB's built-in get_floor_normal method, which returns the floor normal if the KB is touching the ground.
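In sketch form (assuming a standard move_and_slide call; the velocity variable is illustrative):

# Initial approach: ask the KB for the floor normal after moving.
velocity = move_and_slide(velocity, Vector3.UP)
if is_on_floor():
    var surface_normal = get_floor_normal()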
However, this quickly proved insufficient: when the player fell onto a half-pipe, they would slide down rather than align with the surface where they landed.
To solve this, I added a raycast component to the player that extends a little below the player's feet, so it can still detect floors even when the player isn't touching one. After the character moves, I iterate through all the slide collisions, and if any of them belong to an object tagged as a half-pipe, the character's transform is gradually aligned with the normal reported by the raycast's collision.
First, a function is defined to align a transform with a new up vector:
func align_with_y(xform: Transform, new_y: Vector3) -> Transform:
    # Replace the up axis, rebuild a perpendicular x axis, then
    # orthonormalize to keep the basis a valid rotation.
    xform.basis.y = new_y
    xform.basis.x = -xform.basis.z.cross(new_y)
    xform.basis = xform.basis.orthonormalized()
    return xform
And then the global transform of the KB is interpolated to this new aligned transform:
var new_xform = align_with_y(global_transform, collision.normal)
global_transform = global_transform.interpolate_with(new_xform, 2 * delta)
Aligning the model to the KB
The second problem is aligning the model to the KB. The easiest way to achieve this is to make the model a child of the KB, so that as the KB's transform changes, the model's local transform reflects the changes (position and rotation). This proved insufficient because the model was locked to the orientation of the KB: as the camera rotated, the KB and model rotated with it, so adjusting the camera also spun the model around. The player would always be looking at the back of the model; there was no way to rotate the camera to see another angle of it. The model, KB, and camera were tightly coupled together.
The solution was to separate the model and the KB: rather than being a child of the KB, the model sits at the same level in the scene tree. This means a little extra code to reposition the model to the KB's position whenever the KB moves.
On flat surfaces this was fairly straightforward. Every game loop, set the model's transform origin to match the KB's, and set the model's transform basis to match the KB's, adjusted for the y rotation of the camera. In pseudo-code:
process():
    if camera_moved_left:
        model_y_rotation--
    if camera_moved_right:
        model_y_rotation++
    model.global_transform.origin = kb.global_transform.origin
    model.global_transform.basis = kb.global_transform.basis
    model.rotate_around(Vector3.UP, model_y_rotation)
However, this only worked on flat surfaces. If the character was standing on a non-flat surface, the character's up vector would no longer be Vector3.UP, but rather the up vector of the KB.
I initially wrote a solution using quaternions, but having just written the sentence above, a much more obvious solution came to mind. In pseudo-code:
model.rotate_around(kb.global_transform.basis.y, model_y_rotation)
Note: In Godot, kb.global_transform.basis.y returns the up vector of a transform.
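Putting the two fixes together, the per-frame sync might look like this (a sketch; the model and kb node references are assumed):

func _process(delta):
    # Glue the model to the KB's position and orientation...
    model.global_transform.origin = kb.global_transform.origin
    model.global_transform.basis = kb.global_transform.basis
    # ...then yaw it around the KB's local up vector, so the camera can
    # orbit independently even when the character is aligned to a slope.
    model.global_transform.basis = model.global_transform.basis.rotated(
        kb.global_transform.basis.y, model_y_rotation)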
Camera orbiting around the KB up vector
While writing this post I realised that there was a bug in how the camera orbited the player on a sloped surface. The camera uses the KB's y rotation to orbit; however, if the player is aligned to a slope, the rotation is no longer around the global up vector [0, 1, 0] but around the KB transform's up vector. In the screenshot below, the KB's up vector is [0, 0.0981, 0.995].
In this situation, using the KB's y rotation as the target angle for the camera sort of works: the camera still orbits around the player in a circular motion, but the movement isn't linear. As the gif below shows, the camera orbits slowly through some angles, then suddenly speeds up, and slows down again.
I could tell in my mind what the solution was: get the local y rotation around the character. But this required hours and hours of trial and error, a Stack Overflow answer, and an implementation of a function Unity has but Godot doesn't.
My first solution half solved the problem. Using the identity transform, I used the align_with_y function to align it to the KB's up vector, then got the angle between the resulting transform's rotation quaternion and the KB transform's rotation quaternion. In code:
var xform = align_with_y(Transform.IDENTITY, global_transform.basis.y)
var q = xform.basis.get_rotation_quat()
var angle = q.angle_to(global_transform.basis.get_rotation_quat())
However, this angle only worked across 180 degrees; it seemed the quaternion's angle_to method would only find the shortest angle between the two (angle_to is unsigned, so it can never report a rotation larger than 180 degrees, and it carries no sign to say which way round the rotation goes).
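This can be seen with two rotations around the same axis that are 340 degrees apart one way but only 20 degrees apart the other; angle_to reports the shorter of the two and drops the sign:

var a = Quat(Vector3.UP, deg2rad(10))
var b = Quat(Vector3.UP, deg2rad(350))
# angle_to always returns the minimal unsigned rotation between the
# two quaternions, so this prints roughly 20, not 340.
print(rad2deg(a.angle_to(b)))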
After a lot of searching on Google, I finally found an answer that looked right: finding the signed axis angle between two quaternions. However, that block of code relied on Unity's Quaternion.ToAngleAxis, which Godot doesn't implement, and unfortunately Unity is closed source. Fortunately, I found this webpage describing how to implement it, so I did:
func quat_to_angle_axis(q1: Quat):
    var axis = Vector3.ZERO
    # acos is only defined for inputs in [-1, 1], so normalise first.
    if q1.w > 1:
        q1 = q1.normalized()
    var angle = 2 * acos(q1.w)
    var s = sqrt(1 - q1.w * q1.w)
    if s < 0.001:
        # The angle is close to zero, so the axis is ill-defined;
        # avoid dividing by s and return the raw components.
        axis.x = q1.x
        axis.y = q1.y
        axis.z = q1.z
    else:
        axis.x = q1.x / s
        axis.y = q1.y / s
        axis.z = q1.z / s
    return [angle, axis]
And then the implementation of the get_signed_angle function:
func get_signed_angle(a: Quat, b: Quat, axis: Vector3):
    # The relative rotation that takes a to b.
    var q = b * a.inverse()
    var res = quat_to_angle_axis(q)
    var angle = res[0]
    var angle_axis = res[1]
    # If the rotation axis points away from the reference axis, the
    # rotation runs the opposite way, so flip the sign.
    if axis.angle_to(angle_axis) > deg2rad(90.0):
        angle = -angle
    # Note: the Stack Overflow implementation uses Unity's
    # Mathf.DeltaAngle(0f, angle) here.
    return angle
And finally using this new signed angle when setting the camera’s rotation:
var xform = align_with_y(Transform.IDENTITY, global_transform.basis.y)
var q = xform.basis.get_rotation_quat()
var q2 = global_transform.basis.get_rotation_quat()
var angle = get_signed_angle(q, q2, global_transform.basis.y)
Which worked perfectly: