I'm a trainee cutting engineer being trained on a VMS-80. The other day I was introduced to the concept of the 'mono for stereo' switch on the pitch computer control panel. My mind is farting all over the place.
Could someone give me a simple nuts-and-bolts understanding of this? My previous understanding of 'stereo mode' is that varying depth is determined by changes in phase and level, so the depth increases whenever the groove would become awkward to track on playback. For example, a loud, out-of-phase signal would require the most groove depth; any depth decision is made between those two attributes.
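To show what I mean, here's a toy model of how I picture the stereo-mode depth decision. This is just my own sketch (the function name, the `k` scaling, and the numbers are all made up, not actual VMS-80 behaviour): the vertical cut component is the L-R difference, so a loud out-of-phase signal maximises it and forces the deepest cut.

```python
def required_depth(left, right, min_depth=1.0, k=1.0):
    """Toy model of my understanding of stereo mode: groove depth
    sits at a minimum and grows with the vertical (L-R) component,
    which already combines both phase and level. The lateral (L+R)
    component would mainly affect pitch/land, not depth."""
    side = 0.5 * (left - right)  # vertical cut component
    return min_depth + k * abs(side)

# In-phase loud signal: no vertical modulation, minimum depth.
print(required_depth(1.0, 1.0))   # -> 1.0
# Out-of-phase loud signal: maximum vertical modulation, deepest cut.
print(required_depth(1.0, -1.0))  # -> 2.0
```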
(As a side note: can a stereo groove crash between its left and right signals?)
So, assuming my thinking there is okay: mono for stereo disregards the phase (vertical) attributes and increases depth above the minimum based on lateral level changes alone? What's the reasoning behind this, and when would you judge it an acceptable method? It's my understanding that this helps you cut louder. How?
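If I've read the switch right, it would amount to something like this (again a hypothetical toy model expressing my own conjecture above, not documented VMS-80 logic):

```python
def depth_mono_for_stereo(left, right, min_depth=1.0, k=1.0):
    """Sketch of my reading of 'mono for stereo': the depth computer
    ignores the vertical (L-R) component entirely and drives depth
    from the lateral (L+R) level alone."""
    mid = 0.5 * (left + right)  # lateral cut component
    return min_depth + k * abs(mid)

# A loud out-of-phase signal no longer forces a deep cut...
print(depth_mono_for_stereo(1.0, -1.0))  # -> 1.0
# ...while a loud in-phase (effectively mono) signal still does.
print(depth_mono_for_stereo(1.0, 1.0))   # -> 2.0
```

Which, if true, would seem to explain the louder cut: less depth spent on phase content means more room for level. But I'd like someone to confirm whether that's actually what the switch does.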
I have a feeling this is simpler than I've made it out to be in my head. I'd really appreciate some clarification from more experienced cutting engineers.