I don't see anything stopping this trajectory other than the obvious energy limits, but if any group is going to take charge of whatever energy is available, it will be the military and other govt depts.
The robots themselves will be limited by their own energy supplies and how much autonomy that provides each unit. They're not much use if they only last 30 minutes before needing a battery swap. Compare that to a human military unit that can go a whole day on some rations.
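Just to put rough numbers on that (every figure below is an assumption pulled out of the air for illustration, not a spec for any real platform):

```python
# Back-of-envelope endurance comparison. All numbers are illustrative
# assumptions, not published specs for any real robot or ration.

ROBOT_BATTERY_KWH = 3.5      # assumed onboard battery capacity (kWh)
ROBOT_AVG_DRAW_KW = 1.0      # assumed average draw while walking/working (kW)

HUMAN_RATION_KCAL = 3200     # rough daily field-ration energy (kcal)
KCAL_TO_KWH = 0.001163       # 1 kcal is about 0.001163 kWh

robot_hours = ROBOT_BATTERY_KWH / ROBOT_AVG_DRAW_KW
human_kwh_per_day = HUMAN_RATION_KCAL * KCAL_TO_KWH

print(f"Robot endurance per charge: {robot_hours:.1f} h")           # ~3.5 h
print(f"Human energy budget per day: {human_kwh_per_day:.1f} kWh")  # ~3.7 kWh
```

On those made-up assumptions, the robot drains a battery holding roughly a soldier's whole daily energy budget in a few hours, and then needs a swap or a charger rather than a pouch of rations.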
Drones are looking useful, and with swarm capability and AI they will be able to overwhelm more expensive and sophisticated elements on the battlefield and beyond.
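Here's a toy cost-exchange sketch of why that worries people (the prices and intercept odds below are invented purely to show the shape of the asymmetry):

```python
# Toy cost-exchange calculation for the swarm argument. Every number here
# is a made-up assumption for illustration, not real procurement data.

DRONE_COST = 2_000            # assumed cost per expendable drone ($)
INTERCEPTOR_COST = 150_000    # assumed cost per defensive interceptor ($)
P_INTERCEPT = 0.7             # assumed chance one interceptor stops one drone

swarm_size = 100
expected_interceptors = swarm_size / P_INTERCEPT   # expected shots to stop them all

attack_cost = swarm_size * DRONE_COST
defence_cost = expected_interceptors * INTERCEPTOR_COST

print(f"Attack cost:  ${attack_cost:,.0f}")
print(f"Defence cost: ${defence_cost:,.0f}")
print(f"Defender pays roughly {defence_cost / attack_cost:.0f}x more per engagement")
```

Even if every one of those guesses is off by a factor of two, the defender is still paying far more per engagement than the attacker, which is the whole point of a swarm.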
I would say robotic units are more or less just as easy to disable as human units: armor-piercing rounds, electronic warfare, simple traps, and obstacles of all kinds. People will find a way.
Of course, military units may use shielding to defend against this, but I find the whole game to be really silly at this point. We're still sending human soldiers into a meat grinder full of mines and decades-old tanks. They'll probably have to take a break again now that the weather makes it difficult to send in these vehicles. Meanwhile, these armies presume to have all kinds of exotic weapons on hand that would end these wars in minutes.
And here's the real reason people are concerned about the use of robots. They can, and most likely will, be turned on citizens the same way that heavily armed government thugs are let loose on protesters at rallies or whatever. I mean... sometimes you have to control a crowd, and sometimes it's all for show, but if robotic units are deployed people will lose any kind of respect for them (if there was any for human units, that is) and resort to guerrilla tactics: blinding, trapping, and disabling these beasts.
Even now, the govt can roll out homeland security squads or the National Guard to deal with dissidents of all kinds if it so wishes. Human operatives follow illegal orders (unfortunately), but some may still have a soul in there somewhere and stand down if they feel what they're doing is wrong. Not so with robotic units: they will follow whatever programming they are given without question. The idea that some kind of restraining command or constitutional code will be embedded in the programming seems to me a little naive. If a govt wanted to use these systems to oppress the people, there would be nothing to stop it except dedicated pockets of resistance that knew what they were doing.
Interesting thoughts. One of the differences between science fiction writers and normal people is that we SF scribblers tend to assume that mundane normie arguments against something will be overcome in due time. In fact, an integral part of the SF canon is this set of laws:
"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
Any sufficiently advanced technology is indistinguishable from magic."
-- Arthur C. Clarke
And those of us who paid attention to our histories of science can find countless examples of that playing out. I recall arguing with somebody on Usenet who claimed we would never have cheap, hand-held e-book readers because battery technology would never permit that. In fact, we not only got those, but we got cheap, hand-held supercomputers (by the standards of that time) whose batteries permitted them to operate for days at a time.
Robotics is still in very early days, technologically speaking. As you say anent another subject, science, especially military science, will probably find a way.
OTOH, I do tend to give more serious consideration to your worries about tyrannical uses against the citizenry, because all governments are tyrannical at their hearts, and if the technology exists, they will use it. No more would the Chinese need to import troops from the far provinces to put down uprisings in the capitals. Robot enforcers don't care.
I've probably been misleading by featuring humanoid robots, because the more logical creations will be purpose-built. A crowd control 'bot might look more like a threshing machine than Robby, and be far more effective for it, equipped with everything from disabling sounds to mass slaughter mechanisms to be employed as needed - and far less vulnerable than what Boston Dynamics is building today - which, by the way, is ridiculously advanced over what the company was building five years ago.
Never forget the power of Moore's Law. Also keep in mind that there is no hothouse incubator for technology better than war itself. There is no guarantee that humans will always find a way, either, especially since at the moment, technology is advancing far more quickly than we are.
Yeah, I tend to be doomy sometimes. But then, that's what we do here downstream from the culture pools. We look at downsides.
I tend to think of the differences between Keith Laumer's BOLO tanks and Fred Saberhagen's Berserker spaceships. It all depends on the programming, which depends on who is doing the programming, I guess.
For an interesting third take, check out Iain M. Banks' Culture series.
I never got round to the Culture series. Thanks for reminding me to check it out.
This has to do with our ambiguous relationship to becoming soldiers. On the one hand, being a soldier carries a great deal of risk: you get shot at. On the other hand, being a soldier potentially carries great power: you decide the fate of nations, maybe even stage a coup.
Thus even though many people don't want to be soldiers themselves, they don't trust others, especially robots, to be soldiers either.
I would be hesitant to assert that "staging a coup" is very high on the list of reasons to become a soldier. In most people's minds, at least.
It's related to the reason people fear robot soldiers.
I see. OK, granted.