
Fusion has me completely befuddled


NeilMyers

  • Posts: 15
  • Joined: Thu Apr 22, 2021 11:35 pm
  • Real Name: Neil Myers

Fusion has me completely befuddled

PostSun May 05, 2024 12:18 am

I'm a 15-year Adobe user. I am an expert in Premiere, After Effects, and Photoshop. I am competent in Audition.

I came over to Resolve for two reasons:

1) Stability (Adobe should be embarrassed at how unstable their products are AND ALWAYS HAVE BEEN).

2) Color correction/grading. I produced a documentary and HATED having to port a clean copy of the picture-locked doc to a consultant, where I spent $12K to have her color correct and grade the doc. She was awesome, but I would prefer to do my own grading, but Adobe doesn't hold a candle to DaVinci for color.

Until now it has been wonderful. I love the way Resolve edits, and Color is amazing. But now I am in Fusion and I am having a lot of trouble.

By way of background, I am highly experienced in After Effects -- the concept of compositing and animating pixels doesn't confuse me. But after two days of reading, online training, and YouTube classes, I'm ready to give up.

My task seems relatively easy. I have drone footage circling a building. I have still images I want to composite onto the roofs of the buildings. I needed three simple things:

1. A way to warp the still image into the right perspective

2. A way to animate the perspective over time since it changes.

3. A way to build some masks:

a) A mask so the image ONLY appears on the roof

b) A mask for extraneous things that should block the still images (trees, other buildings, etc.)

At first I was super excited about the Planar Tracker -- that seemed perfect! But I couldn't get it to work. So, I simply brought the image in, used CornerPositioner to get it into the right perspective, and then added keyframes at five places to keep it in. That worked fine, but eventually I would come back and all the keyframes would be gone. The first time I assumed user error. After three times I suspect either a bug, or something I simply don't understand.

I moved on to the masking. I started with Magic Mask, but it couldn't handle the complexity of the scene (too many details, less than optimal contrast between objects). I then tried a Bitmap mask, but same issue. So I built a polygon mask and animated that. Unfortunately, I ran into the same issue as with the CornerPositioner -- I keep losing the animation!

Before I give up and go back to After Effects for these tasks, let me ask. What am I doing wrong? Are there better tools in Fusion for what I am doing? Is Fusion actually fantastic and as a noob I am just screwing everything up? What are the best resources for training?

I am running Resolve 19 beta 2.

Details about my rig:

Windows 11
The PC is a beast -- I purpose-built it 5 months ago to be my editing rig. It pegs the meter on all benchmark tests -- I get all green checks on Blackmagic's RAW benchmark tool -- it shows I can edit 3:1 RAW in 8K at 150 fps. For those who want the details:

AMD Threadripper PRO 5975WX 32 cores, 3600 MHz, 64 logical processors
256GB RAM
NVIDIA GeForce RTX 4090, 24GB video memory, 16,384 CUDA cores, 2,520 MHz
16TB RAID 10 SSD

Sander de Regt

  • Posts: 3635
  • Joined: Thu Nov 13, 2014 10:09 pm

Re: Fusion has me completely befuddled

PostSun May 05, 2024 7:27 am

Well, if you did everything the way you just said, you shouldn't be having any issues.

So either you've found bugs (which is quite possible) or you have done something different than you think. 1,2,3a and 3b are all possible and I've done it quite a number of times.
And actually most of your first tries were the right approach.

The Planar Tracker would be the first choice. Where it gets difficult is the part where you say 'I couldn't get it to work' followed by 'what am I doing wrong?'. To find that out, we have to reconstruct the steps you took in the Planar Tracker that didn't work for you. Otherwise it's hard to tell where you went wrong.

So let's take a step back and see if we can guide you through this.

Can you show us the shot and/or your node setup etc? And we'll take it from there.
Sander de Regt

ShadowMaker SdR
The Netherlands

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostSun May 05, 2024 2:42 pm

NeilMyers wrote:
My task seems relatively easy. I have drone footage circling a building. I have still images I want to composite onto the roofs of the buildings.


The most common method would be to use the Planar Tracker node to track a flat plane like the roof, and simply use the built-in operation mode called Corner Pin: after analyzing a planar surface, this mode computes and applies a matching perspective distortion to a foreground image you connect to the foreground input of the Planar Tracker node, and merges it on top of the tracked footage. Super straightforward. Search online for any Fusion screen-replacement tutorial for more details.

NeilMyers wrote:
I needed three simple things:

1. A way to warp the still image into the right perspective


This is already built into the Planar Tracker node, but if for whatever reason you want to do it as a separate operation, there are two nodes:

Corner Positioner [CPn] and Perspective Positioner [PPn]

The Perspective Positioner is the complementary node to the Corner Positioner node. It “unpins” an image by positioning corner points on a perspective distorted area, thereby removing the perspective from the image. This function can also be used to wobble and warp the image by animating the points over time.

The Corner Positioner can be used to position the four corners of an image interactively. This would typically be used to replace a sign or other rectangular portion of a scene. Connect all corners to Paths or Trackers for animation purposes.

You can find how to connect them and what features they offer in the reference manual.

But like I said, the Planar Tracker already offers corner pin, merge, and tracking in one node.
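To make the corner-pin idea concrete: a corner pin is just a planar perspective transform (a homography). Given four source corners and the four tracked destination corners, you can solve for the eight unknowns and warp any point through the result. This is a minimal pure-Python sketch of the underlying math, not Fusion's API; the corner coordinates below are made up:

```python
# Solve for the homography h (8 unknowns, h8 fixed at 1) that maps four
# source corners onto four destination corners, then apply it to a point.
# This is what a corner pin does internally.

def solve_homography(src, dst):
    # Two equations per corner correspondence -> an 8x8 linear system.
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    # Gaussian elimination with partial pivoting.
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return h

def apply_homography(h, pt):
    # Projective divide: straight lines stay straight, parallels converge.
    x, y = pt
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# A unit-square still pinned onto a (made-up) tracked roof quadrilateral.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(120.0, 80.0), (400.0, 95.0), (380.0, 300.0), (100.0, 280.0)]
h = solve_homography(src, dst)
```

Animating a corner pin is then just animating the four destination corners over time and re-solving per frame, which is exactly what the tracker hands the Corner Pin mode.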

NeilMyers wrote:
2. A way to animate the perspective over time since it changes.


The Planar Tracker already offers corner pin, merge, and tracking in one node. If for some reason you want to do it manually, simple keyframe animation would do it, using the Corner Positioner [CPn] or Perspective Positioner [PPn] nodes.

NeilMyers wrote:
3) A way to build some masks:

a) A mask so the image ONLY appears on the roof

b) A mask for extraneous things that should block the still images (trees, other buildings, etc.)



a) I would use any number of masking options in Resolve. If it's solid masking tools you are looking for, there are plenty of options, each suited to some jobs better than others. From your description I would probably choose the B-Spline Mask tool.

Bitmap Mask [BMP]
B-Spline Mask [BSP]
Ellipse Mask [ELP]
Mask Paint [PNM]
Polygon Mask [PLY]
Ranges Mask [RNG]
Rectangle Mask [REC]
Triangle Mask [TRI]
Wand Mask [WND]

b) If you are looking for complex masking, depending on what you need, you can combine multiple masking tools. For example, you can use any of the keying tools, like the 3D Keyer, the HSL or Luma Keyer, or the Wand tool (works like the magic wand from Photoshop, but on steroids), and then use Matte Control to combine multiple masks for more complex stuff.

Masking in Fusion is similar to Photoshop, except Fusion has more options and more advanced ways to mask. But the basic principles are the same.
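To see why combining masks in a Matte Control is straightforward: a matte is just a grayscale image, so boolean combines become per-pixel math. A tiny illustrative sketch (the function names are mine, not Fusion's), using 1-D "images" for brevity:

```python
# Combining mattes, as a Matte Control-style operation does internally:
# masks are grayscale images, so set operations become per-pixel math.

def union(a, b):      # image appears where EITHER mask is on
    return [max(x, y) for x, y in zip(a, b)]

def intersect(a, b):  # image appears only where BOTH masks are on
    return [min(x, y) for x, y in zip(a, b)]

def subtract(a, b):   # e.g. roof mask minus the occluding tree mask
    return [max(0.0, x - y) for x, y in zip(a, b)]

roof = [0.0, 1.0, 1.0, 1.0, 0.0]   # where the still may appear
tree = [0.0, 0.0, 1.0, 0.0, 0.0]   # foreground that should block it
visible = subtract(roof, tree)      # roof area with the tree punched out
```

This is why the roof mask and the tree "blocker" mask from the original question are naturally two separate masks combined at the end, rather than one shape.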

Magic Mask is also available if you are on the Studio version of Resolve.

NeilMyers wrote:
At first I was super exited about Planar Tracker -- that seemed perfect! But I couldn't get it to work. SO, I simply brought the image in, used CornerPositioner to get it into the right perspective, and then added keyframes at five places to keep in. That worked fine, but eventually I would come back and all the key frames would be gone. The first time I assumed user error. After three times I suspect either a bug, or something I simply don't understand.


It's not a bug. It's expected behaviour, and it's in the manual.

What the Planar Tracker Saves

While the Planar Tracker does save the resulting final track in the composition on disk, it does not save temporary tracking information such as the individual point trackers (compared with the Camera Tracker, which does save the individual point trackers). Some consequences of this include:

— The point trackers no longer appear in the viewer when a comp containing a Planar Tracker node is saved and reloaded.

— Tracking may not be resumed after a comp containing a Planar Tracker node has been saved and reloaded. In particular, this also applies to auto saves. For this reason, it is good to complete all planar tracking within one session.

— The size of composition files is kept reasonable (in some situations, a Planar Tracker can produce hundreds of megabytes of temporary tracking data).

— Saving and loading of compositions is faster and more interactive.

If you need to reapply the data from the Planar Tracker once you have done the track, you can export the tracking data for use on other nodes (multiple of them) with a tool the Planar Tracker generates, called the Planar Transform node [PXF].

The Planar Transform node applies perspective distortions generated by a Planar Tracker node onto any input mask or masked image. The Planar Transform node can be used to reduce the amount of time spent on rotoscoping objects. The workflow here centers around the notion that the Planar Tracker node can be used to track objects that are only roughly planar. After an object is tracked, a Planar Transform node can then be used to warp a rotospline, making it approximately follow the object over time. Fine-level cleanup work on the rotospline then must be done.

Depending on how well the Planar Tracker followed the object, this can result in substantial time savings in the amount of tedious rotoscoping. The key to using this technique is recognizing situations where the Planar Tracker performs well on an object that needs to be rotoscoped.
A rough outline of the workflow involved is:

1 Track: Using a Planar Tracker node, select a pattern that represents the object to be rotoscoped. Track the shot (see the tracking workflow in the Track section for the Planar Tracker node).

2 Create a Planar Transform node: Press the Create Planar Transform button on the Planar Tracker node to do this. The newly created Planar Transform node can be freely cut and pasted into another composition as desired.

3 Rotoscope the object: Move to any frame that was tracked by the Planar Tracker. When unsure if a frame was tracked, look in the Spline Editor for a tracking keyframe on the Planar Transform node. Connect a Polygon node into the Planar Transform node. While viewing the Planar Transform node, rotoscope the object.

4 Refine: Scrub the timeline to see how well the polygon follows the object. Adjust the polyline on frames where it is off. It is possible to add new points to further refine the polygon.
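The roto step of that workflow, sketched in code: the spline is drawn once on the reference frame, and the tracker's per-frame transform carries it through the shot, so you only clean up residual error. The transform here is simplified to translate+scale (the real Planar Transform applies a full perspective warp), and the track values are invented:

```python
# Sketch of the Planar Transform roto workflow: draw the spline once,
# then let the tracker's per-frame transform move it through the shot.

def warp(points, transform):
    # Simplified motion model: uniform scale s plus translation (tx, ty).
    tx, ty, s = transform
    return [(s * x + tx, s * y + ty) for x, y in points]

# Rotospline drawn on the reference frame (frame 0).
spline = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]

# Per-frame track data produced by a (hypothetical) tracker.
track = {0: (0.0, 0.0, 1.0), 1: (5.0, 1.0, 1.1), 2: (9.0, 2.5, 1.2)}

def spline_at(frame):
    # The spline itself has one keyframe; the track does the animating.
    return warp(spline, track[frame])
```

The payoff is exactly what the manual describes: instead of keyframing every point on every frame, you keyframe the shape once and only touch up frames where the plane assumption breaks down.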

If you use the Planar Tracker for operations like corner pinning or a steady/unsteady workflow, then you don't need anything other than one or two Planar Tracker nodes (a steady and an unsteady copy of the original), depending on what you want.


NeilMyers wrote: I moved on to the masking. I started with Magic Mask, but it couldn't handle the complexity of the scene (too many details, less than optimal contrast between objects). I then tried a Bitmap mask, but same issue. So I built a polygon mask and animated that. Unfortunately, I ran into the same issue as with the CornerPositioner -- I keep losing the animation!


Other than a bug, I think you are probably doing something wrong, but I can't say what without seeing your workflow. You have mostly described your frustrations, and not much in the way of what exactly you did or didn't do, so it's hard to know.

NeilMyers wrote: Before I give up and go back to After Effects for these tasks, let me ask.

a) What am I doing wrong?


If I had to guess, I would say the most common problem is that, like so many Adobe users, you are trying to force Fusion to work like After Effects, which not only limits your options but ends in frustration. Fusion is a different animal. It has many solutions and tools that you don't have in After Effects, or that don't approach things the same way, and they solve many problems, complex and simple. But to take advantage of it, you have to learn how Fusion does things, not After Effects.

Unlearning 15 years of comfortable workflows in Adobe can be a serious challenge, I understand, but I think that is the biggest issue here. Your attempt to force Fusion to be After Effects is where you spend most of your energy, in frustration, instead of learning Fusion as what it is.

NeilMyers wrote:
b) Are there better tools in Fusion for what I am doing? Is Fusion actually fantastic and as a noob I am just screwing everything up? What are the best resources for training?

I am running Resolve 19 beta 2.

I would suggest you start with the official reference manual. It makes it easy to look up any node, how to connect it, what all the options do, and often how it relates to other nodes. There are even tutorials in it.

I found a lot of good stuff in the Resolve/Fusion official reference manual, where they explain all the nodes and tools and what they do.

https://elements.tv/blog/davinci-resolv ... l-is-here/

For me that was a good start, because you can actually begin to understand why these tools exist in the first place and what their functions are. There is also a very good set of tutorials from the original eyeon people who built Fusion.

I would start with those. They offer great explanations of the fundamentals and creative ways to use them. Unlike many half-baked tutorials online, these are really solid. And very inspiring.

eyeon Fusion has an official YouTube page with all the videos.

https://www.youtube.com/@eyeonsoftware

Good luck. Ask if you have any issues.

NeilMyers

  • Posts: 15
  • Joined: Thu Apr 22, 2021 11:35 pm
  • Real Name: Neil Myers

Re: Fusion has me completely befuddled

PostSun May 05, 2024 4:02 pm

I don't see a way to respond to individual responses, so I'll respond here to both Kruno and Sander. Thanks for reading my post, and for putting up with my frustrated tone (arrgggh!).

The main thing I take away is to write a second post with details of what I did so people can offer suggestions. That will be my first task this morning.

Kruno -- I saw your detailed explanation about the "feature" of Planar Tracker not remembering stuff. I wasn't clear -- it was CornerPositioner (and, later, the poly mask tool) that was "forgetting" my keyframes. Is this also expected behavior for these tools?

I will push back on trying to make Fusion work like After Effects. Not at all! In general I LOVE how different DaVinci Resolve is. So far, these differences have ALL been for the better (Color is the best example). And I don't mind working to learn the differences (I have been reading the manual, and the tool reference, and watching many videos).

My frustration is not that it is different. Rather, I am frustrated that it isn't working the way I *think* it has been described. Is it something I am doing wrong? I sure hope so! That would be FAR better than a bug.

Thanks again, I appreciate your generous (and patient) responses.

NeilMyers

  • Posts: 15
  • Joined: Thu Apr 22, 2021 11:35 pm
  • Real Name: Neil Myers

Re: Fusion has me completely befuddled

PostSun May 05, 2024 5:41 pm

Progress! The TLDR version is I did a lot more research last night, and tried PlanarTracker again this morning and it worked. Yay!

I have a few questions about what to do for the rest of the shot (masks, adding other planar transforms to this same footage), but I'll do research on my own and see if I can get it myself before posting a question.

Here is a link to a quick YouTube upload of the result.



And here is a link to a Word document that contains the six-page description, in case there was something I did that could have been done in a better way.

https://www.dropbox.com/scl/fi/vurbcw6ltzili2nbc3wv5/Using-PlanarTracker-2024-05-05.docx?rlkey=g7hfcsnk17sye20mb1n1vj05f&dl=0

Thanks for everyone's help!

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostSun May 05, 2024 6:29 pm

NeilMyers wrote:Kruno -- I saw your detailed explanation about the "feature" of Planar Tracker not remembering stuff. I wasn't clear -- it was CornerPositioner (and, later, the poly mask tool) that was "forgetting" my keyframes. Is this also expected behavior for these tools?


I see. Well, the Polygon mask and B-Spline tools have an option turned on by default to create keyframes from any changes you make to the mask. This is done so you can start using the tools for roto right away and get instant animation. Any time you animate something by changing a parameter or moving points, a keyframe is auto-generated on the timeline, and the shape is interpolated between the keyframes the user adds. This gives smooth animation with as few manual keyframes as needed.

This is the default behaviour, but it can be changed, or the keyframes deleted. Whether you did that or not, I can't say.

"A B-Spline mask is identical to a Polygon mask in all respects except one. Where Polygon masks use Bézier splines, this mask node uses B-Splines. Like the Polygon mask tool, the B-Spline mask auto-animates. Adding this node to the Node Editor adds a keyframe to the current frame. Moving to a new frame and changing the shape creates a new keyframe and interpolates between the two defined shapes. The B-Spline node can be used to generate a single smooth spline shape or combined with other masks for more complex shapes."

I personally prefer B-Spline over the Polygon mask because it takes fewer points to roto an object. Fewer points means it's easier to make changes. Other than that, it's identical.

There are tons of options for each, but "losing" keyframes is not something I'm used to. I can only assume you might not have had the correct node selected, or something simple like that, because it's possible to have one node selected, see the sliders of another, and watch yet other nodes in the viewers. That's useful for complex work, but if you are new to Fusion, maybe it's just that simple: you had one node selected and thought you had the correct one.

Anyway, you can see here that the B-Spline is not even connected to the main footage, but I can still work on the image, and you can also see that keyframe animation is turned on by default. This can be unexpected if you are not aware of it, so maybe you just had the wrong one selected.

sshot-3978.jpg


You can always check the spline editor to see if there is any keyframes used by the tools.

sshot-3979.jpg


As you continue to work on your footage and roto, each time you change the shape to adjust it, a new keyframe is added, and in between there is interpolation for smooth movement. You can change this, but that is the default behaviour.
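That auto-keyframe behaviour can be sketched as a tiny data structure: any edit at a frame silently records a keyframe, and playback interpolates the points between the surrounding keyframes. (Fusion interpolates with splines; linear interpolation keeps this illustration short, and the class is mine, not Fusion's API.)

```python
# Sketch of Polygon/B-Spline auto-keyframing: every shape edit at a
# frame records a keyframe; playback interpolates the points linearly
# between the two surrounding keyframes.

class AutoKeyedShape:
    def __init__(self):
        self.keys = {}  # frame -> list of (x, y) points

    def edit(self, frame, points):
        # Changing the shape on any frame silently adds a keyframe.
        self.keys[frame] = points

    def shape_at(self, frame):
        frames = sorted(self.keys)
        if frame <= frames[0]:
            return self.keys[frames[0]]
        if frame >= frames[-1]:
            return self.keys[frames[-1]]
        lo = max(f for f in frames if f <= frame)
        hi = min(f for f in frames if f >= frame)
        if lo == hi:
            return self.keys[lo]
        t = (frame - lo) / (hi - lo)
        return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
                for (ax, ay), (bx, by) in zip(self.keys[lo], self.keys[hi])]

mask = AutoKeyedShape()
mask.edit(0, [(0.0, 0.0), (1.0, 0.0)])   # first edit: keyframe at frame 0
mask.edit(10, [(2.0, 0.0), (3.0, 0.0)])  # adjusting later adds another
```

This also shows why an adjustment made on the "wrong" frame can look like keyframes vanishing: the edit lands as a new keyframe at the current frame rather than changing the shape everywhere.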

sshot-3980.jpg


NeilMyers wrote: I will push back on trying to make Fusion work like After Effects. Not at all! In general I LOVE how different DaVinci Resolve is. So far, these differences have ALL been for the better (Color is the best example). And I don't mind working to learn the differences (I have been reading the manual, and the tool reference, and watching many videos).

My frustration is not that it is different. Rather, I am frustrated that it isn't working the way I *think* it has been described. Is it something I am doing wrong? I sure hope so! That would be FAR better than a bug.

Thanks again, I appreciate your generous (and patient) responses.


Well, it's good to hear you are willing to learn. Yes, it must be just a mismatch between how you expect the tools to work and how they actually work. Once you get used to it, I think your frustrations will be gone.

NeilMyers wrote: And, here is a link to a Word document that contains the six page description in case there was something I did that could have been done in a better way.


1. Add PlanarTracker tool (output of my video footage goes into the background of the PlanarTracker node).

Good.

2. Drew polygon on my footage to define the area onto which I wanted to post the still:

Good.

...........................................

3. Set option in the PlanarTracker tool

Good.

The Set and Go buttons are important. Set defines the reference frame, and Go takes you to the reference frame. Before you start tracking, whatever frame you are on (even if it's frame 0), just click the Set button. It seems you have done that. Good.

"The Reference Time determines the frame where the pattern is outlined. It is also the time from which tracking begins. The reference frame cannot be changed once it has been set without destroying all pre-existing tracking information, so scrub through the footage to be tracked and choose carefully. The reference frame must be chosen carefully to give the best possible quality track. You choose a reference frame by moving the playhead to an appropriate frame and then clicking the Set button to choose that frame."

Tracker

There are two available trackers to pick from:

— Point: Tracks points from frame to frame. Internally, this tracker does not actually track points per se, but rather small patterns, like Fusion's Tracker node. The point tracker possesses the ability to automatically create its internal occlusion mask to detect and reject outlier tracks that do not belong to the dominant motion. Tracks are colored green or red in the viewer, depending on whether the point tracker thinks they belong to the dominant motion or they have been rejected. The user can optionally supply an external occlusion mask to further guide the Point tracker.

— Hybrid Point/Area: Uses an Area tracker to track all the pixels in the pattern. Unlike the Point tracker, the Area tracker does not possess the ability to automatically reject parts of the pattern that do not belong to the dominant motion, so you must manually provide it with an occlusion mask. Note that for performance reasons, the Hybrid tracker internally first runs the Point tracker, which is why the point tracks can still be seen in the viewer.

There is no best tracker. They each have their advantages and disadvantages:

— Artist Effort (occlusion masks): The Point tracker will automatically create its internal occlusion mask. However, with the Hybrid tracker, you need to spend more time manually creating occlusion masks.
— Accuracy: The Hybrid tracker is more accurate and less prone to wobble, jitter, and drift since it tracks all the pixels in the pattern rather than a few salient feature points.
— Speed: The Hybrid tracker is slower than the Point tracker.
In general, it is recommended to first quickly track the shot with the Point tracker and examine the results. If the results are not good enough, then try the Hybrid tracker.
Motion Type

Determines how the Planar Tracker internally models the distortion of the planar surface being tracked. The five distortion models are:

— Translation.
— Translation, Rotation (rigid motions).
— Translation, Rotation, Scale (takes squares to squares, scale is uniform in x and y).
— Affine includes translation, rotation, scale, skew (maps squares to parallelograms).
— Perspective (maps squares to generic quadrilaterals).

Each successive model is more general and includes all previous models as a special case.

Translation, Rotation, Scale seems to be good for drone footage, if Perspective does not work.
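The difference between those motion models is easy to demonstrate: a similarity transform (translation, rotation, uniform scale) keeps a square square, while adding skew (affine) turns it into a parallelogram. A small sketch, with arbitrary parameter values:

```python
import math

# Contrast two of the motion models above: a similarity transform
# (translation + rotation + uniform scale) maps squares to squares,
# while an affine transform with skew maps squares to parallelograms.

def similarity(pt, tx=2.0, ty=1.0, angle=0.3, scale=1.5):
    x, y = pt
    c, s = math.cos(angle), math.sin(angle)
    return (scale * (c * x - s * y) + tx, scale * (s * x + c * y) + ty)

def affine_skew(pt, skew=0.5):
    # Horizontal shear: a simple affine that is not a similarity.
    x, y = pt
    return (x + skew * y, y)

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
sim = [similarity(p) for p in square]     # still a square
aff = [affine_skew(p) for p in square]    # now a parallelogram

def side_lengths(quad):
    return [math.dist(quad[i], quad[(i + 1) % 4]) for i in range(4)]
```

Perspective goes one step further: it maps squares to arbitrary quadrilaterals, which is why it is the model a corner pin ultimately needs.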

...........................................

4. Pressed “Track to End”, and watched it create the tracking data. NOTE: It got wonky as the drone footage got to an oblique position with a giant tree blocking the plane I was focused on.

By wonky I mean the bounding area was jumping all over the place. So, I stopped the operation and deleted the tracking data after the frame just before this started.

Yes. It's generally a good idea to watch the footage before tracking for any possible occlusions, like the tree. If you have one, you can create an occlusion mask to tell the tracker not to pay attention to the thing you have designated. An occlusion mask can be anything, from a simple shape drawn with the Polygon mask to a colour selection from a keyer used as a mask.

Depending on the footage, sometimes if you have a red car driving through a forest, it's best to colour-select the red car and make it the only thing to track, the car paint. That automatically excludes everything else.

With footage like some random stuff on a building and a tree in front, you will probably have to rotoscope the tree with a loose garbage matte, for which the B-Spline or Polygon tool will do fine, and then hook that mask into the Planar Tracker's occlusion mask input.

- Occlusion Mask: The white occlusion mask input is used to mask out regions that do not need to be tracked. Regions where this mask is white will not be tracked. For example, a person moving in front of and occluding bits of the pattern may be confusing the tracker, and a quickly-created rough rotomask around the person can be used to tell the tracker to ignore the masked-out bits.

— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the output of the Planar Tracker to certain areas.

Also "The point tracker" option possesses the ability to automatically create its internal occlusion mask to detect and reject outlier tracks that do not belong to the dominant motion. Tracks are colored green or red in the viewer, depending on whether the point tracker thinks they belong to the dominant motion or they have been rejected. The user can optionally supply an external occlusion mask to further guide the Point tracker.

So if you have anything other than footage with no distractions, I would choose Point tracking, to ignore any disruptions. If that's not enough, use an occlusion mask you made yourself. Hybrid Point/Area is a steadier way to track, with fewer micro-jitters, but it needs a clean source to be successful.

........................................

5. Next, I added my image, followed by a color correction node, followed by a Corner Positioner node.

This step is not needed, since all of that is done inside the Planar Tracker node with the Corner Pin option, which makes the Corner Positioner node redundant; you would not use it like that anyway. It's more of a node to use on static footage when you want to change the perspective of something and put it back into perspective after a change. It can also stand in for the Planar Tracker's corner pin when you are using the ordinary Tracker (more precisely, four trackers) and have enough data to corner pin something; unlike the Planar Tracker, the ordinary Tracker has no corner pin option, so the node you used could serve there. With the Planar Tracker it's actually counterproductive, since it's not as efficient.

6. I then created the Planar Transform using the “Create Planar Transform” button:

7. I then placed the Planar Transform between my media segment and the footage segment. AND ... it works! Why is this working today, and not yesterday? I think because I had done some serious video watching last night that cleared up some misconceptions I had about where to place the nodes.

While it's possible to make it work like that, it's like using a bazooka to kill a fly and not worrying about collateral damage. It's just not needed. The Corner Pin option inside the Planar Tracker will do all that plus more. You get blend modes and a merge all in one node, plus tons of other options.

One thing to keep in mind about Fusion is that almost every node is actually a whole toolset, with more options than some entire applications. It's usually a multitool. The Paint node, for example, is about as versatile as the whole Photoshop toolset. It's crazy what you can do with one node.

By the way, about the Planar Tracker: if you need to do a lot of work on the stabilized area, another way to work is the steady/unsteady workflow. You track with your Planar Tracker and put it in Steady mode, which stabilizes the tracked area. It's also a good way to preview how stable the track is when you are done. Whatever you set your reference time to in Track mode, set the same reference time in Steady mode.

Anyway, you set the tracker to Steady, copy the same tracker node, place it a bit further down the line, and check Invert in Steady mode to bring back the original motion. Now you place as many nodes as you like between the steady and unsteady nodes, and you do your work there. That way you work on stabilized footage and can paint, retouch, add graphics, anything you like, and then the unsteady copy of the original tracker brings back the original motion. It's a good way to work when you have multiple operations on the footage.
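The steady/unsteady round trip can be sketched like this: Steady applies the inverse of the tracked motion, the inverted copy applies the forward motion, so anything you add in between re-acquires the shot's movement. Motion is simplified to per-frame translation here, and the track values are invented:

```python
# Steady/unsteady sketch: "steady" removes the tracked motion (inverse
# transform), you do your paint/roto work on the stabilized frame, then
# the inverted copy ("unsteady") puts the motion back.

track = {0: (0.0, 0.0), 1: (4.0, 2.0), 2: (9.0, 5.0)}  # tracked drift

def steady(pt, frame):        # stabilize: subtract the motion
    tx, ty = track[frame]
    return (pt[0] - tx, pt[1] - ty)

def unsteady(pt, frame):      # invert steady: add the motion back
    tx, ty = track[frame]
    return (pt[0] + tx, pt[1] + ty)

# A mark painted at a fixed spot in stabilized space...
painted = (10.0, 10.0)
# ...re-acquires the shot's motion on every frame when unsteadied.
positions = {f: unsteady(painted, f) for f in track}
```

Because steady followed by unsteady on the same frame is an exact round trip of the transform, whatever you do in between lands back in the moving shot.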

................................

8. Here is a frame from the result: You can see Bruce Springsteen composited onto the roof of the Flamingo hotel (stock image of Bruce). I fade the image in and out so I don’t have to deal with the wonkiness that happens late in the Planar Transform. That actually works for the storyline anyway.

There you go. Kudos.

9. So, now I have to do the following:
a. General clean-up (like setting the blend mode to Multiply, a little work on color).
b. Create TWO masks:
i. One for the tall column (the FLAMINGO sign)
ii. One for the roof itself (so the image doesn’t spill beyond the surface of the mask).
c. Repeat the process for adding images on top of the other roofs.

Yes, those would be the next steps. You can use the Polygon or B-Spline tool to roto in one of two ways. Actually more than two, but these two are best.

Either use the Planar Transform from the tracker to steady the roto work, so you roto the sign on the first and last frames and then see if there is any need to roto in between. Because the Planar Transform carries the shape, the movement in between is usually steady, and there are typically only one or two frames to adjust.

Or, if you have a bunch of stuff to do, use the steady/unsteady approach, which does the same but without the Planar Transform node, and is a more flexible way to work when there is more to do.

I'll post a video I did to illustrate this.

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostSun May 05, 2024 6:36 pm

NeilMyers wrote:Progress! The TLDR version is I did a lot more research last night, and tried PlanarTracker again this morning and it worked. Yay!


Cool. Nice progress. I didn't know what footage you are using, so I found some drone footage to illustrate the process, one clip less complex than the other.

I put a checkerboard pattern in instead of Bruce and rotoscoped the cars. The second video is the more complex version, and I could have done a better job with more time. Anyway, here is the concept.

Track & Roto Test



sshot-3974.jpg


sshot-3973.jpg


Also, in the Settings tab of the Planar Transform node, you can activate motion blur for more realistic integration with moving footage.

Also, you can work with two viewers: one for the final result, one for adjusting the thing you are pinning, so you can make any adjustments in undistorted mode. Like this.

sshot-3975.jpg
sshot-3975.jpg (268.28 KiB) Viewed 1263 times

Kel Philm

  • Posts: 613
  • Joined: Sat Nov 19, 2016 6:21 am

Re: Fusion has me completely befuddled

PostSun May 05, 2024 8:42 pm

I never knew there was a perspective positioner node!

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostSun May 05, 2024 9:30 pm

Kel Philm wrote:I never knew there was a perspective positioner node!


You probably never had to use it. Personally I don't find I need it often either. But it can be used for manual corner pinning, or to change perspective in order to paint on a flatter-looking image and then use the Corner Positioner to put it back in place, though that's a cumbersome use case.

Here is the typical way to set it up. It's not fun to work with, though, since you are eyeballing it. Here is a basic manual setup and example.

sshot-3976.jpg
sshot-3976.jpg (177.07 KiB) Viewed 1209 times


sshot-3977.jpg
sshot-3977.jpg (168.48 KiB) Viewed 1209 times


The way to work with it is with four trackers, one for each corner. It then basically replaces the corner pin operation in the Planar Tracker, while letting you do the same with the regular Tracker. To make it match, though, you have to publish the four corner coordinates in the Tracker and connect the two positioner nodes to them.

This is also something that can be done if you use Mocha for tracking and export the tracking data to Fusion. Since Mocha is a planar tracker, it produces data that can be used this way.

birdseye

  • Posts: 376
  • Joined: Fri Jun 12, 2020 2:36 pm
  • Real Name: Iain Fisher

Re: Fusion has me completely befuddled

PostTue May 07, 2024 6:29 am

I would advise against the steady/unsteady workflow; it will soften your original background. If you load the original plate into the A channel of the viewer and the steady/unsteady image into the B channel, then zoom in, you can clearly see the softening. If you are going to steady to do your work, then unsteady just your work and merge that over the original plate.

Hideki Inoue

  • Posts: 239
  • Joined: Sun Nov 23, 2014 8:26 am
  • Location: Tokyo, Japan

Re: Fusion has me completely befuddled

PostTue May 07, 2024 7:37 am

NeilMyers wrote:I am running Resolve 19 beta 2.

Here is my first concern. I recommend v18 for now.
Fusion Studio v18.6.6 / DeckLink Mini Monitor 4K
Intel® Core™ i9-13900KS / 128GB RAM / Nvidia Geforce RTX 4090 (546.33) / Windows 10 Pro 22H2
Intel® Core™ i9-7980XE / 128GB RAM / Nvidia RTX A5000 (536.67) / Windows 10 Pro 22H2

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostTue May 07, 2024 2:26 pm

birdseye wrote:I would advise against the steady/unsteady workflow, it will soften your original background. if you load the original plate in the A channel of the viewer and the steady/unsteady image in the B channel, then zoom in you can clearly see the softening. If you are going to steady to do your work, then unsteady just your work and merge that over the original plate.


Yes, it does soften the image if you are working on the entire image. When using the steady/unsteady workflow, the best approach is to work only on the patches you are changing and leave everything else as is, by working with transparency. Whether it's a paint/clone process or not, by using a Background node with alpha set to zero you are only painting on the area you need to, so there is no softening of the entire image, only of a small portion of it, which can either be sharpened a bit to match the original or, if you re-grain the footage, hidden by the grain. If working with clean plates, it's best to use a mask on the area you are changing and keep the rest transparent. This avoids the softening problem.

The big advantage of the steady/unsteady workflow is that it's not necessary to stabilize the individual elements you might be adding, like multiple retouching areas. As long as you work with transparency, it's all good. But yes, if you don't use transparency, then you will soften the original footage.

birdseye

  • Posts: 376
  • Joined: Fri Jun 12, 2020 2:36 pm
  • Real Name: Iain Fisher

Re: Fusion has me completely befuddled

PostWed May 08, 2024 10:18 am

I'm not sure whether you have misunderstood me or I have misunderstood you, but if you steady/unsteady the background plate, that is not the same as not steadying. Steady/unsteady runs a planar transform process on the image twice; not steadying means there is no planar transform process at all. So if you pass your image through a Planar Tracker to steady it, then do your work, then pass the whole lot through another Planar Tracker to unsteady it, you end up with a third iteration of the original background plate, transparency or no transparency.

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostWed May 08, 2024 11:09 am

birdseye wrote:I'm not sure you have understood or I have misunderstood what you mean, if you steady/unsteady the background plate, that is not the same as not steadying.


No, it's not, I agree. I don't think I misunderstood; I just think that the advice against the steady/unsteady workflow in your previous comment needs further explanation, that's all.

You have made a comment at the end: "If you are going to steady to do your work, then unsteady just your work and merge that over the original plate."

That will work, but there are other ways to minimize the softening, or avoid the worst of it, as well. It doesn't invalidate the whole process; since you advised against it, I felt I should expand on the topic for clarification.

There is no approach to doing work on unstable footage that does not involve some kind of trade-off, whether it's tracking time, loss of quality, a more time-consuming setup, or whatever else. But I find that the steady/unsteady workflow has more advantages than disadvantages, and the disadvantages can be overcome to the point that they are not a real problem. Since it's easy to set up and quick to work with when you need to do a lot of clean-up or something similar, I personally find it more convenient and quicker than other methods, and in the end, when done with a few pitfalls in mind, the results are worth a bit of softening that you can hide so no one will notice.

birdseye wrote: Steady/unsteady does a planar transform process twice on the image, not steadying means there is no planar transform process it at all. So if you pass your image through a Planar Tracker to steady it, then do your work, then pass the whole lot through another Planar Tracker to unsteady it, you end up with a third iteration of the original background plate, transparency or no transparency.


That's true. In a practical sense it doesn't really matter, since you won't notice the softening if you use transparency for the areas of the image you are not working on. There is then no visible transformation happening there at all.

For example, say you need to paint out a tattoo or a scar or something. You track the area with a planar tracker and set it to steady mode, then you copy the planar tracker and invert the steady mode to bring back the original shaky motion before you output the whole thing. This will introduce softening, as you pointed out, because of the multiple transformations.

But we only need to work on a portion of the image, like the tattoo, not the whole image. If I use the Paint tool, all I do is use a Background node with alpha set to zero, matching the resolution of the footage, leaving a canvas to paint on that is transparent everywhere except where I paint. Then I hook the Paint node up to that Background node and use a Merge node to connect it between the two planar (steady/unsteady) nodes. That way I can load the steadied footage as the clone source, paint on a steady part of the image and not the rest of the footage, so there is no softening except a little bit in the small portion where I'm painting. Because it's all on a Merge node, I can set the merge to the lighten or darken blend mode to affect only luminosity values, and you won't notice anything strange, unless you pixel peep at 200%.
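To illustrate the darken/lighten merge trick in isolation, here is a small NumPy sketch. The arrays are made-up stand-ins for Fusion images (0..1 floats); this shows only the blend math, not Fusion's internal code.

```python
import numpy as np

# A darken merge keeps the per-pixel minimum, so a paint stroke only
# shows where it is darker than the plate; stray strokes elsewhere
# are invisible. Lighten is the mirror image (per-pixel maximum).

def merge_darken(bg, fg):
    # Stroke appears only where it is darker than the background.
    return np.minimum(bg, fg)

def merge_lighten(bg, fg):
    # Stroke appears only where it is lighter than the background.
    return np.maximum(bg, fg)

plate = np.array([0.2, 0.5, 0.8])   # three sample pixels
stroke = np.array([0.6, 0.6, 0.6])  # flat paint stroke

print(merge_darken(plate, stroke))   # [0.2 0.5 0.6]
print(merge_lighten(plate, stroke))  # [0.6 0.6 0.8]
```

This is the old Photoshop trick mentioned above: the blend mode itself limits where the paint has any effect.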

There are several reasons I do this with the steady/unsteady workflow instead of just a transform node.

One reason is that sometimes I want to do multiple paint, patch, and texture operations on the same area, and with this process it's easy to work on something as if it were a static frame, instead of matching the movement later.

The second, more important reason is that the paint/clone tool in Fusion is great for many things, but one thing it is bad at is certain types of skin retouching, because it only clones; it does not heal like Photoshop. That is when I make a clean plate and use Photoshop for the skin work, then bring the clean plate back in and freeze-frame it with the Time Stretcher or Time Machine nodes. Since it's done in steady mode, it will match the movement; the only things it needs are matching the lighting changes and masking it to only the portion of the footage I want to change. This limits the softening problem to the portion of the image the clean plate covers, and, like I said, if there is a visible problem, a little sharpening to bring back skin texture, or re-graining the footage at the end, makes the softening a non-issue while providing a lot of advantages to the process.

I don't know if that is what you do or had in mind, but I think it's important to point out the advantages of the process and not simply advise against it wholesale.

.........................

When you are doing 3D tracking and clean-up with projections, you have in essence the same steady/unsteady workflow and the same softening problem, which can be overcome in a similar way.

For those reading who are unfamiliar with it: I only found a Nuke tutorial explaining the process, one which shows the softening problem when you do this rather than just the method, but it's essentially the same process and problem in Fusion.

The comments section is full of useful tips to minimize the problem, and the same is more or less true for the planar tracker when using this approach, minus the projection of course.

Cleanup made easy with UV Projection



Fusion variation of this, and there are more than one, can be seen here:

Fusion Advanced - Simple UV Unwrapping



Speaking of this workflow, whether it's 3D or 2D, it also depends on how big the patch of the image is, what the operation is, and what the footage is. As much as the steady/unsteady workflow adds a bit of softening to the image, when used properly the advantages outweigh the disadvantages, and in some situations the advantages are too great to ignore, while the disadvantages (softening of the image being the main one) can be masked out, covered up by other effects, or fall in a part of the image, or a kind of footage, where they won't be a problem at all.

That being said, there are other options, of course. Inserting something into the footage and using a planar transform to match-move it won't introduce softening, but if you are doing complex work with a lot of nodes, it's much easier to simply use steady and unsteady copies of the planar tracker instead. There are also third-party tools, Mocha Pro for example, macros like "SMART VECTORS", or the Surface Tracker for essentially the same kind of steady/unsteady workflow.

birdseye

  • Posts: 376
  • Joined: Fri Jun 12, 2020 2:36 pm
  • Real Name: Iain Fisher

Re: Fusion has me completely befuddled

PostThu May 09, 2024 8:44 am

I would have thought it easier to unsteady just the work being added. You wouldn't even need to leave the Planar Tracker: just swap the mode from Stabilise to Corner Pin, set the corner pin to the full raster, and merge the work on that way. If you need a mask, create a Planar Transform and pass the mask through it.
I have never used Nuke, but I think planar transforms can be concatenated there; that's not the case in Fusion as far as I know, so I guess that would make Nuke more suitable for the steady/unsteady workflow.

Hendrik Proosa

  • Posts: 3084
  • Joined: Wed Aug 22, 2012 6:53 am
  • Location: Estonia

Re: Fusion has me completely befuddled

PostThu May 09, 2024 9:28 am

Cornerpins (planar tracker creates one too) and transforms concatenate with each other in Nuke, but only when they follow directly. Concatenation is basically just a set of matrix multiplications and last op does the filtering using final concatenated matrix. Non-transform derived ops break the concatenation because you can't concat a filter or paint job with a 4x4 matrix. So logic is the same as in Fusion for practical application.
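Hendrik's point can be sketched in a few lines of NumPy. The matrices and numbers here are made up for illustration: any chain of pure transforms collapses into one matrix, so only one resampling pass is needed at the end.

```python
import numpy as np

# Two 2D affine transforms as 3x3 homogeneous matrices. Concatenation
# is just a matrix product, so the whole chain costs one filtering pass.

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def scale(s):
    return np.array([[s, 0, 0], [0, s, 0], [0, 0, 1]], float)

# Concatenate first, apply once...
chain = translate(10, 5) @ scale(2.0)
p = np.array([3.0, 4.0, 1.0])       # a point in homogeneous coordinates
one_pass = chain @ p

# ...which lands the point in exactly the same place as applying
# the two transforms one after the other.
two_pass = translate(10, 5) @ (scale(2.0) @ p)
print(one_pass, two_pass)  # same point: [16. 13.  1.]

# A paint or blur node in between has no matrix representation, so the
# chain breaks and each transform must resample the pixels separately.
```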
I do stuff

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostThu May 09, 2024 12:37 pm

birdseye wrote:I would have thought it easier to unsteady just the work that would be added, you wouldn't even need to leave the Planar Tracker, just swap the mode from Stabilise to Corner Pin, set the Corner Pin to the full raster and merge the work on that way.


Yes, that would seem logical, until you try certain operations and discover the limitations.

For example, if you need to add a sign on a planar, even surface, or replace a screen, this method and tool were made for it. Works really well. But if you need to add something to an uneven surface, like the folding fabric of a T-shirt or the folding skin of a person, you would have to use something like the Surface Tracker. Even better is Mocha Pro, mainly because of its Insert module and the ability to match color and tone over the length of the shot by using other frames to calculate the differences in changing lighting conditions; you can make one static clean plate and apply it across the whole range. Object removal is the closest thing in Fusion that does this, but it's much more limited compared to Mocha.

If you use Corner Pin mode in the planar tracker to, say, remove a tattoo, even on a fairly flat surface like the back of a person perpendicular to the camera, you would have to paint the tattoo with the source being the whole frame, but you only tracked a portion of it. Hence you get into trouble with the differing dimensions and the stable and unstable parts of the footage. It's easy if you want to add a tattoo; the problem is when you want to remove an existing one.

What about using the Planar Transform node? This would be more doable: you could paint on one frame, mask it, and use the planar transform to move it with the footage. While it would work, there are several disadvantages for retouching work.

a) You are working on unstable footage and stabilizing it after painting/cloning, which is a manual painting process. If you use something like the Patch Replacer or similar, then it's mostly just picking a source and a destination and stabilizing with a planar transform; no problem. But not so when you use the Paint tool or a clean plate.

The Patch Replacer is awesome and I use it all the time. It automatically heals texture and tone, it's GPU accelerated so it's fast, and it's easy to use. Great. Except for one thing: the shapes for the source and destination are more or less limited to size and position choices; you can't even rotate them, and you can only choose a rectangle or an ellipse for the shape.

But what if the tattoo you want to remove cannot fit into the available shapes of the Patch Replacer? Sometimes it's possible to chain more than one Patch Replacer to cover more area, but not always.

Here is a screenshot of the problem of using this tool to remove a tattoo of irregular shape close to areas you want to keep.

sshot-3696.jpg
sshot-3696.jpg (167.36 KiB) Viewed 599 times


sshot-3697.jpg
sshot-3697.jpg (167.55 KiB) Viewed 599 times


So in that case you can't use Corner Pin mode, since you are removing a tattoo, and you can't use the Patch Replacer all the way, even if it can sometimes get you started.

Thus, you have to either use the Paint tool in clone mode, or use a clean plate and freeze-frame it.

If you use the Paint tool as a clone tool, you can have it sample an area over time and match the lighting, color, and texture of the nearby source, which is all you need. Unfortunately, unlike Photoshop's Heal or Patch tools, the Paint tool only has a clone mode, no heal mode. This is a serious problem in skin retouching, where you need soft, blended transitions between the areas you work on; the clone tool leaves very harsh transitions, leaving a trace of where the work was done. Sadly, Blackmagic (and eyeon Software before them) never added a heal mode to the Paint tool; otherwise it would be the perfect tool.

You can somewhat mitigate this problem by using a pressure-sensitive tablet, like a Wacom stylus, and being very soft with your clone strokes. You can also sometimes put the Paint tool on a Merge node and use the merge's darken or lighten blend mode so you only paint over the areas you want; it's an old Photoshop trick.

If, instead of dealing with the clone tool's limitation, you choose to freeze a frame, export it, heal it in another application like Photoshop, and then re-import the clean plate, you get proper retouching, but you have another problem: matching the color and tone of one static frame over time. And since you can't have one frame covering every other frame, you have to limit it to the area you want to retouch by using a mask.

This is where Mocha Pro works best. It can track as well as or better than the Fusion tracker, and it even has an equivalent to the Surface Tracker, something with a mesh (I forget the name), so the texture bends with the tracked surface. But most importantly, the technology driving the Remove and Insert modules in Mocha is so good for this type of work because it can analyze all the frames over the tracked timeline, calculate the color and lighting changes, and adjust the newly inserted clean plate.

The closest I've come to this in Fusion using the existing tools is the steady/unsteady workflow with a clean plate. If I can just use the clone tool, then it's great: the lighting will match and all is well. If I can't clone to the point where the retouching looks good, then I try to use a clean plate, retouch in Photoshop, and match the lighting over time later in Fusion. I'll mention my experiments with this later.

Anyway, this is how the node tree might look.

sshot-3783.jpg
sshot-3783.jpg (178.99 KiB) Viewed 599 times


But for the sake of this discussion, I will illustrate how you can get close with a planar tracker, the steady/unsteady workflow, paint/clone, and color matching. Far from perfect results, but it's possible to get close.

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostThu May 09, 2024 12:47 pm

As you can see here, I can get close even with a very challenging shot where the light, the shape, and everything else changes.

Original video by cottonbro studio at Pexels.

https://www.pexels.com/video/woman-art- ... p-4124400/

sshot-3703.jpg
sshot-3703.jpg (135.84 KiB) Viewed 597 times


sshot-3698.jpg
sshot-3698.jpg (132.62 KiB) Viewed 597 times


As you can see, it's far from perfect, but you can get close, and there is no softness compared to the original except in the patched/retouched area. This is because I paint on a Background node with alpha set to 0, and the clone source can be any source; just drag it into the source target area. If there are any problems, one can always merge the whole thing on top of the original and mask it to only the area where you need it.

sshot-3704.jpg
sshot-3704.jpg (213.43 KiB) Viewed 597 times


The reason the steady/unsteady workflow works better here than using a Planar Transform node is that you are painting on a steadied area, not steadying your paint after the fact. That makes the painting/cloning process far easier and better. And as you can see, softness is not an issue.

What is an issue in this particular shot is the quality of the retouch. That's because for a real job I would not use the planar tracker for this; it's just the wrong tool for the job, but I wanted to illustrate the process.

I would use either the Surface Tracker or, better yet, Mocha Pro: paint a few clean plates in Photoshop and let Mocha match the lighting, and since it's folding skin, I would use their mesh option.

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostThu May 09, 2024 12:50 pm

Using the Surface Tracker to steady and unsteady, as with the planar tracker, works even better, since it creates its own masked area to stabilize and warp as needed, along with the tracking. The only problem is that the clone tool is only able to clone, so a clean plate is the better approach, but that leaves the problem of matching color. I'll talk about my latest experiment with this later.

sshot-3953.jpg
sshot-3953.jpg (136.54 KiB) Viewed 596 times


sshot-3954.jpg
sshot-3954.jpg (133.65 KiB) Viewed 596 times


As you can see, this approach delivers better results, but there is the problem of matching color over time when using a clean plate versus the native clone/paint tool. So it really depends on the specific footage.

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostThu May 09, 2024 1:01 pm

Matching the color of one object to another is an important process in Fusion. As the name suggests, Fusion is about fusing things together, compositing them in a believable way.

There are various methods to match the colors and tone of one element to another in Fusion. Some are static and some are dynamic, adapting to changing conditions, but none is perfect for everything, and it's usually best to pick the right one for each situation.

A few days ago I discovered something that is a more or less perfect color match tool in Fusion, so here is how it works, what it is, and where you can find it.

**Just Testing Dynamic Color Matching and Light Wrap In Blackmagic Fusion (WIP)**



Footage of some fire with a B&W checkerboard pattern and text that says “Color Match” merged on top. I was trying to find a good way to dynamically match color and tone and light-wrap it all. It's a constant search for the best method, so this is a WIP, or work in progress. There are other approaches with the Probe modifiers and some other nodes that I'll mention, but this method seems to work best so far… for me.

This quick little test is done with two macros as proof of concept.

………………………………………………………

**Color Inspector Pro** v1.82 - 30 April 2020, a macro by Sam Treadway @ Intelligent Machine & Milolab (the macro is available in Reactor for Fusion)

Originally made to extract color values from an area (the average, minimum, or maximum RGB values), it can also be used to create an average color of the footage on any given frame, dynamically updated. All that is then needed is to set it to the Geometric blend mode found inside the Merge node, and it's a perfect color match that updates as the source footage changes.

Geometric blend mode is good for HDR images that have out-of-range colors above 1. For values above zero, the result is 2 times the foreground times the background color, divided by the foreground plus the background color.

Out = 2*Fc*Bc / (Fc+Bc)

But whatever its intended purpose, it works perfectly for this one. It can be used for all sorts of compositing jobs, like green screen replacements, sign replacements, or retouching and matching clean plate color and tone, similar to how Mocha Pro does it. It's a very fast method too: real-time results, unlike the other macro.
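For anyone who wants to sanity-check the formula above, here is a plain NumPy sketch of the blend math (an illustration only, not Fusion's actual implementation; the sample values are made up):

```python
import numpy as np

# Geometric blend: Out = 2*Fc*Bc / (Fc + Bc), the per-channel harmonic
# mean of foreground and background. Works for 0..1 and HDR values.

def geometric_blend(fg, bg, eps=1e-12):
    # eps guards the division when both channels are zero.
    return 2.0 * fg * bg / (fg + bg + eps)

fg = np.array([0.25, 0.5, 1.0])  # solid color from the color inspector
bg = np.array([0.75, 0.5, 0.2])  # element being matched
print(geometric_blend(fg, bg))   # pulls each channel toward both inputs
```

Note how equal inputs pass through unchanged (the 0.5 channel stays 0.5), while mismatched channels are pulled together, which is why it behaves so well as a color-match blend.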

………………………………………………………

It's also possible to do this with the Color Corrector node that comes with Fusion: in the Histogram tab, set it to “Match”, plug in the reference (usually the background), and it will do something very similar. If the effect is too strong, use the same Geometric blend mode in the Merge node.

Match: The Match mode modifies the source image based on the histogram from the reference image. It is used to match two shots with different lighting conditions and exposures so that they appear similar.

When selected, the Equalize and Match modes reveal the following controls.

— Match/Equalize Luminance: This slider affects the degree that the Color Corrector node attempts to affect the image based on its luminance distribution. When this control is zero (the default), matching and equalization are applied to each color channel independently, and the luminance, or combined value of the three color channels, is not affected.
If this control has a positive value when equalizing the image, the input image’s luminance distribution is flattened before any color equalization is applied.

If you don't need dynamic changes in color and tone, just once or a few times, the Color Curves node does the same, but it will not update automatically: it's either for the whole duration or you have to animate it manually. That is why the Color Corrector node is more convenient.
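For the curious, the per-channel idea behind histogram matching can be sketched in a few lines of NumPy. This is a generic CDF-matching illustration with made-up values, not the Color Corrector's actual algorithm:

```python
import numpy as np

# Histogram matching for one flattened channel: remap each source value
# to the reference value sitting at the same quantile, so the source's
# distribution ends up shaped like the reference's.

def match_histogram(src, ref):
    src = np.asarray(src, float)
    ref = np.asarray(ref, float)
    # Rank of each source pixel within its own sorted values (its CDF position)...
    ranks = np.searchsorted(np.sort(src), src) / (src.size - 1)
    # ...mapped onto the reference values at the same quantiles.
    return np.interp(ranks, np.linspace(0, 1, ref.size), np.sort(ref))

dark = np.array([0.1, 0.2, 0.3, 0.4])    # underexposed "shot"
bright = np.array([0.5, 0.6, 0.7, 0.8])  # reference "shot"
print(match_histogram(dark, bright))     # pulled up into the reference range
```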

………………………………………………………

**“The Wrapper”** macro by the legendary Rony Soussan

It seems to work in all situations, for all kinds of compositing needs. Super customizable. It's dynamic, so no matter how the color and light change, it reacts automatically. I guess the only drawback is that “The Wrapper” is a very old tool/macro, probably created more than 15 years ago by Rony Soussan.

I think it originally shipped as part of Fusion before Blackmagic bought it, but if you search for it you should be able to find it, as people keep sharing it. It's great and works really well, but it's a bit slow to render and could probably benefit from a modern version. If only it were GPU accelerated, it would be the perfect tool.

I exaggerated the light wrap effect for well… effect.

………………………………………………………

The 2D elements just have the light wrap and color match, but no light direction matched to the fire footage. That could be done if the 2D elements were put into 3D space (image planes), for example, with a point light placed where the fire is and a Probe modifier used to provide backlight to the elements. But that was beyond this particular test. It does work, though.

My initial test for that can be seen here:




**Testing Dynamic Color Matching and Light Wrap In Blackmagic Fusion (WIP)**



Footage is: “Wolfgang Langer - A Woman Practicing Her Presentation Skill Through Words And Hand Gestures video”

The original footage is on a green screen background.


To test it, I put some rotating lights animation with changing color behind the subject instead of the green screen in order to test color matching as the background change, and light wrap.


………………………………………………………

This same tool could be used to match the color of patches you retouched and placed into footage, whether it's a corner pin (like a logo or sign replacement), which is especially useful when the color and light change over the length of the shot, or skin retouching work, blending VFX elements onto skin, like textures, fake tattoos, etc.

It's very fast, delivers fairly predictable, consistent results, works on all types of footage and situations, is easy to set up, and is not heavy on resources. Give it a try.

………………………………………………………

You use the ColorInspectorPro macro and hook up the background element you want to use as the reference to Input 1, and the foreground element whose color you want to match to Input 2. You let the tool output one solid color representing the background average, and you place (merge) that over the top of the thing you want to change the color of. You set the blend mode of the Merge node, usually to Geometric, but you can experiment with other blend modes. All you then need is a mask from the original foreground element, so you pull its blue line into the mask input of the Merge node.
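The setup described above can be mimicked numerically. This NumPy sketch, with made-up values, only approximates what the macro does: compute the background's average color as a solid, blend it over the foreground with the geometric mode, and restrict the effect with the foreground's alpha acting as the merge mask.

```python
import numpy as np

def average_color(img):
    # img: H x W x 3 float array; one flat color per channel,
    # like the solid color the inspector macro outputs.
    return img.reshape(-1, 3).mean(axis=0)

def geometric(fg, bg, eps=1e-12):
    # Geometric blend: Out = 2*Fc*Bc / (Fc + Bc).
    return 2.0 * fg * bg / (fg + bg + eps)

def color_match(fg_rgb, fg_alpha, bg_rgb):
    solid = average_color(bg_rgb)       # solid average of the background
    matched = geometric(solid, fg_rgb)  # blend the solid over the element
    # Mask input: apply the match only where the foreground has coverage.
    a = fg_alpha[..., None]
    return fg_rgb * (1 - a) + matched * a

bg = np.full((4, 4, 3), [0.8, 0.4, 0.2])  # warm background plate
fg = np.full((4, 4, 3), 0.5)              # neutral gray element
alpha = np.ones((4, 4))                   # full-coverage matte
out = color_match(fg, alpha, bg)
print(out[0, 0])  # gray pushed toward the warm background average
```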

………………………………………………………

sshot-3998.jpg
sshot-3998.jpg (61.32 KiB) Viewed 592 times


sshot-4002.jpg
sshot-4002.jpg (230.19 KiB) Viewed 592 times


sshot-4003.jpg
sshot-4003.jpg (186.56 KiB) Viewed 592 times

KrunoSmithy

  • Posts: 207
  • Joined: Fri Oct 20, 2023 11:01 pm
  • Real Name: Kruno Stifter

Re: Fusion has me completely befuddled

PostThu May 09, 2024 1:10 pm

birdseye wrote:I have never used Nuke but I think Planar Tranforms can be concatenated, that's not the case in Fusion as far as I know, so I guess that would make Nuke more suitable for the steady/unsteady workflow.


From the resolve manual:

Flatten Transformation (Match Move)

This checkbox appears only when the mode is set to Match Move. Like most transformations in Fusion, Stabilization is concatenated with other sequential transformations by default. Selecting this checkbox will flatten the transform, breaking any concatenation taking place and applying the transform immediately.

Stabilize Settings
The Tracker node automatically outputs several steady and unsteady position outputs to which other controls in the Node Editor can be connected. The Stable Position output provides X and Y coordinates to match or reverse motion in a sequence. These controls are available even when the operation is not set to Match Move, since the Stable Position output is always available for connection to other nodes.

Steady Mode
In Steady mode, the Planar Tracker transforms the background plate to keep the pattern as motionless as possible. Any leftover motion is because the Planar Tracker failed to follow the pattern accurately or because the pattern did not belong to a physically planar surface.

Steady mode is not very useful for actual stabilization, but is useful for checking the quality of a track. If the track is good, during playback the pattern should not move at all while the rest of the background plate distorts around it. It can be helpful to zoom in on parts of the pattern and place the mouse cursor over a feature and see how far that feature drifts away from the mouse cursor over time.

Invert Steady Transform

This causes the Planar Tracker node to reverse the effects of the steady transform. This means two Planar Tracker nodes connected back to back with the second set to invert should give back the original image. If you place an effects node in between the two, then the effect will be locked in place. This should only be used to accomplish effects that cannot be done through corner pinning, since it involves two resamplings, causing softening of the background image.

NOTE Stabilize mode only smooths out motions, while Steady mode tries to completely “lock off ” all motion.

..............
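The "two resamplings" softening the manual warns about is easy to demonstrate numerically. Here is a NumPy sketch (a 1D toy example, not Fusion's filtering) using a half-pixel shift and its inverse:

```python
import numpy as np

# Shift a 1D signal by dx pixels with linear interpolation, the simplest
# resampling filter. A steady/unsteady round trip is two such shifts.

def shift_linear(signal, dx):
    x = np.arange(signal.size)
    # Sample the signal at x - dx with linear interpolation.
    return np.interp(x - dx, x, signal)

edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # a hard edge
round_trip = shift_linear(shift_linear(edge, 0.5), -0.5)
print(edge)
print(round_trip)  # the hard edge is softened by the two interpolations
```

The round trip returns the geometry to where it started, but the edge now ramps over several pixels, which is exactly the softening visible in an A/B viewer comparison.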

I think Nuke ultimately works similarly. I don't understand all the math, but there was a thread on this forum about it; I can't find it now, but the discussion was about what the Transform node does, how it compares to Nuke, and concatenation.

Hendrik Proosa wrote:Cornerpins (planar tracker creates one too) and transforms concatenate with each other in Nuke, but only when they follow directly. Concatenation is basically just a set of matrix multiplications and last op does the filtering using final concatenated matrix. Non-transform derived ops break the concatenation because you can't concat a filter or paint job with a 4x4 matrix. So logic is the same as in Fusion for practical application.


My math is weak, but that sounds about right.
