

I think the general idea of those against it is that they assume it is zero sum. Meaning, for everyone to be taken care of, the person must lose or have less.


Phones peaked around 2012. Now they're more cameras than phones.
Folding phones only came out about five years ago, but I bought mine used, and true to the article, my current folding phone is over 24 months old with no plans to replace it.


That is not a good reason,
I didn’t say it was a good reason. The OP said there was no reason. There is a reason.


There’s also no reason that it’s a software handle.
There is a reason. It would be easy to argue it's not worth it, though.
When you pull the door handle, the glass window retracts below the rubber weather stripping, then the door latch unlocks and you open the door. When you close the door and it latches, the window moves back up into the weather stripping, creating a tight seal.


It’s 2 years. Not permanent.
Well, we may have a better shot at getting it permanent in 2 years under *barf* President Vance.


i would assume an actual competent person would be able to answer them immediately and confidently,
People aren’t always able to regurgitate encyclopedic knowledge in interviews. Sure some can, but many have anxiety about interviews in general, or stuff going on in their lives which can make them not the sharpest when hit with a rando question like this. There are some absolutely brilliant people I’ve hired that would fail miserably if this was how they were measured.
Some people work better with scenario-based questions instead of bulleted, memorized answers. Honestly, I'd much rather have a candidate who knows the concept being discussed even if they can't remember the exact name of a term or the name of a flag they'd need to include when issuing a command. Those last things can be googled in the moment. Conceptual knowledge and understanding is much more important to me than rote memorization.
someone reading an LLM prompt probably sounds like they're reading from a script even if the answers aren't wrong
Well, that's what I experienced in my original post, but I'm not sure it will always be that way. Someone more clever could take the answer from the LLM and paraphrase it, or put it in their own words, and sound competent.


I’m not following, wouldn’t an LLM be able to easily answer that one?


From the other side, hiring competent people has gotten much harder now that AI is in everyone's hands. It's making them dumb.
A coworker and I were interviewing someone for a technical role over a video meeting, someone we did NOT get through our network. His answers were strangely generic. We'd ask him a direct question about a technology or a software tool, and the answer would come back like a sales brochure. I messaged my coworker on the side about this strangeness, and he said, "We're not hiring this guy. Watch his eyes. Every time you ask a question, he's reading off the bottom of his screen." My coworker was right. I saw it immediately after he pointed it out. We were only 4 minutes into the interview, and we already knew we weren't hiring this guy. I learned later that there are LLMs you can run while being interviewed that will answer questions for you in real time.
Another one happened within 48 hours of that interview. Someone who had been hired was on a team with me. An error came up in a software tool that we are all supposed to be experts on. I had a pretty good idea what the issue was from the error message text. This other team member posted into our chat what ChatGPT thought of the error. From the first sentence of the ChatGPT message, I could immediately tell it was on the wrong path. It referenced methods our tool doesn't even use.
To put it in an analogy: assume we're baking a cake and it came out too sour. The ChatGPT message said, essentially, "This happens when you put too much lemon juice in. Bake the cake and use less lemon juice next time." Sure, that would be a reasonably decent answer... except our cake had no lemon juice in it. So obviously any suggestion to fix our situation by altering the amount of lemon juice is completely wrong. This team member presented this message and said, "I think we should follow this instruction." I was completely confused, because he's supposed to be an expert on our tool like I am, and he didn't even pause to consider what ChatGPT said before he accepted it as fact. It would be one thing to plug the error message into ChatGPT to see what it said, but to then take that output and recommend following it without any critical thinking was insane to me.
AI can be a useful tool, but it can't be a complete substitute for thinking on your own the way people are using it today. AI is making people stupid.
This is why I generally hire from inside my network or from referrals of people I know. It's so hard to find a qualified worker among all the unqualified workers applying at the same time. I know there are great workers outside my network; I just have no way to find them with the time and resources available to me.
As Ronald Wright wrote in A Short History of Progress (2004): "John Steinbeck once said that socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires." (Note: John Steinbeck never actually said that.)