Carl states that he had no objective after killing John, so he created one for himself. I can understand thinking, "Well, why didn't his new objective involve killing, since that's what he was originally designed for?" I mean, Ted Bundy had purpose. Jack the Ripper had purpose. Why not just emulate those guys, since he too was a "killer"? One thought would be that, like the directive that states "no self-terminating," T-800s might have another directive that states "never jeopardize the existence of Skynet." After John was killed, he might have seen any other human as a potential "Miles Dyson" whose death could have a butterfly effect that causes Skynet to ultimately lose.
Also, going on a nationwide murder spree would make him so high-profile that, even if he didn't kill anyone "important," the ramifications of such a high-profile robot killer finally being taken down by the US military (who would eventually be called in to deal with him) could lead to laws being passed disallowing any future research on AI or robotics, period. Who knows what calculations he could have run in nanoseconds to decide the next best course of action.
First of all, thank you for clarifying that there's no reprogramming scenario for Carl, and how his "new mission" gets explained. Very much appreciated!
I agree with you, for all of the reasons that you state, that it would be problematic for him to go around just killing people after the mission to terminate John was completed. But, aside from the silliness that Skynet wouldn't program secondary objectives into its T-800s upon completion of the primary mission (as a-dev points out in the post following yours), I think there's still a logic problem with Carl's self-generated new objective.
If Carl continued to function in society, and eventually learned that Skynet's origin had been wiped out in 1992, why wouldn't he default to an objective that resets Skynet building blocks? Why would he just accept a future with no Skynet instead of using his historical data, a ton of advanced technology within him, and an AI more advanced than anything in the modern day to try restarting it?
In other words, I could see this movie having been a story about a Terminator actually being the one to start building the Skynet of the future in sort of a self-fulfilling loop. It would undo the achievement at the end of T2, but that was undone anyway by having this "Legion" plot. What I can't understand is the story logic that says a T-800 would find itself with no purpose, and rectify that by choosing to help individuals and families in trouble. That's a very hard sell for me.
As it stands, I still think that Carl is closer to Uncle Bob than Uncle Bob is to the original Terminator. I can believe a Carl could exist after T2, but I still don't see how reprogramming the silent and brutal assassin from the original would suddenly have him hugging and doing high-fives. In other words, I think if you can accept T2, then DF is less of a stretch, even though it admittedly is still a stretch.
The way I rationalize it is that a T-800 is basically an adapting super-intellect that must constantly re-assess its approach whenever it encounters obstacles. With its default programming to kill a target (or targets), it keeps re-assessing the behavior of its target in order to compensate and anticipate, the goal being to modify its responses to achieve the most efficient capture and kill.
By being reprogrammed to serve and *protect* a human, the T-800 (Uncle Bob) learns and adapts to whatever behavior keeps John safest. If that means building trust and camaraderie with young John, then that's what "Uncle Bob" will learn to do. Having the Skynet programming scrubbed means having a clean slate for an advanced AI.
The T2 version of a more "human" T-800 makes sense to me because of that reprogramming. But Carl is essentially a T1 version, programmed by Skynet. His "human" behavior makes no sense without the T2 reprogramming context.
It's difficult to process, I agree. And I'm trying. There may be ways to rationalize it, many of which Khev has gone into, but what bugs me is that the film (which I still haven't seen, mind you) probably doesn't even bother trying to explain. Their attitude was "remember T2? That happened again," and we're like "NO! There was a very particular set of reasons why and how that happened in T2; this is different," to which Dark Fate just shrugs its shoulders and says, "I'm just a movie."
The James Cameron of old would do better than that.
So the problem is: how would a Terminator on Skynet programming end up like Carl, when its general inclination seems to be to kill people? And on the basis of T2, this inclination doesn't really go away. It will only stop killing if ordered to by someone whose orders it is programmed to follow.
I have a theory, probably not touched on by Dark Fate nor any movie before, and I'm only entertaining this idea as a sort of devil's advocate for what Dark Fate has done, but here goes:
Terminators are only Terminators while they have a specific mission to follow. If they don't have a mission (or no longer have one because they completed it), they simply become walking AIs. In T1, the T-800 was programmed to kill Sarah Connor, and it made decisions about who else to kill and not kill as it moved toward that goal. Maybe it deemed that the other people it killed were threats to the mission in some way: either witnesses who would alert the authorities (like the gun shop owner, who would obviously report that some guy had just stolen guns and ammunition from his shop) or the authorities themselves, who were direct threats. In the end, however, this T-800 fails its mission and is destroyed.
In the case of Uncle Bob, even a protection mission might entail killing, for all anyone knows, and so as long as he had a mission, Uncle Bob was open to killing until John ordered him not to do it. But as we know, we can't really look to Uncle Bob as an example, because his mission facilitated a learning curve in ways that the 1984 T-800's mission never would.
So now we have Carl the T-800. He kills who he was told to kill and seemingly had no other specific directives from Skynet, as silly as that seems. With no mission to be endangered, he no longer considers it necessary to kill anyone, and (this is where Khev's suggestion comes in) he shifts into pure infiltration mode. Killing random people endangers successful infiltration, so he doesn't do it anymore.
He is still, however, a learning AI (since "read-only" is no longer a thing, apparently), and his interactions with humans over the course of 30 years turn him into Carl.
Your logic (just like Khev's) for why a T-800 would stop killing is fine, and I can completely agree with it. But Carl understanding the inherent dangers of killing people beyond his target list doesn't mean there's any logical reason for him to become a "domesticated" AI being. He is programmed as a Skynet soldier. To me, preserving Skynet's objectives should be all this machine does, *unless* it gets reprogrammed by someone, which Carl never was.
The "likeable" Terminator works great with Bob in T2 because there's a foundation for it. It can also work whenever those same foundational plot points are used. But it doesn't seem like Dark Fate bothers to incorporate any of that. Like you say, it seems that they just said, "Let's do the T2 formula for Arnold's Carl character," and didn't give a damn about why that doesn't work without the same context from T2.
Now hurry up and watch this movie so I can read your thoughts about how it plays on screen versus reading plot points.