ZFGeek

2 months ago

So...it's been a while since I have posted something like this.

Do you think there is a limit to machine learning, or do you think it could learn how to do almost anything? What would you want to see ML learn, either for the practical aspect, or for pure amusement?

Also, where could I start to learn basic AI and ML programming? I'd love to be able to do it.

Comments

  • 2 months ago
  • 6 points

Well, machine learning depends on what data we can feed it to produce the desired outcome. So a broad answer is: the limits of machine learning are the things we can't reasonably generate data for. For example, we can't use machine learning to create Call of Duty (yet). Call of Duty is created out of a multitude of creative efforts. We could probably use ML to create individual assets that would be appropriate for CoD. We could probably even use ML to create functional bits of code. Then a human could staple all the useful output together. But at this time there is no way to feed CoD as a dataset to an ML algo and have it reliably pump out CoD clones.

Once we get there we're going to see a lot of layoffs at Activision.

There's another answer: the limits on the application of machine learning. Already we are running into hiccups with machine learning in application. A couple examples:

Windows Defender uses ML. I've seen gamedevs complain about Windows Defender incorrectly flagging their games when the games try to update themselves, which prevents the updates from occurring. They raise the alarm to Microsoft, and it gets "fixed", but the algorithm has already reinforced that flagging, so every time a new update goes out it gets re-flagged. There doesn't seem to be a way to pull out this learned behavior, so they're just kind of patch-fixing as they go.

Amazon was using an AI to recommend candidates to HR. It was fed a bunch of resumes from past hires, which were overwhelmingly male, so it started downgrading female applicants and forwarding males... somehow nobody saw this problem coming lol.
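To make that mechanism concrete, here's a toy sketch of how bias like that creeps in. This is my own illustration with made-up synthetic data, not anything from Amazon's actual system:

```python
# Illustrative sketch (synthetic data, not Amazon's system): a classifier
# trained on historically skewed hiring decisions learns to penalize a
# gender-correlated feature even though it isn't job-relevant.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)              # genuinely job-relevant signal
is_female = rng.integers(0, 2, size=n)  # protected attribute

# Biased historical labels: past hiring favored men regardless of skill.
hired = (skill + 1.5 * (1 - is_female) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, is_female]:", model.coef_[0])
# The is_female weight comes out strongly negative: the model faithfully
# reproduces the bias baked into its training data.
```

The model isn't "wrong" in any technical sense; it's doing exactly what it was asked to do, which is imitate the historical decisions it was shown.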

Here is a GitHub page that tracks a lot of bad or potentially-bad AI implementations.


One avenue to start playing with ML is Unity. Here's a link to their ML blog. I haven't touched it in a while, but it seems to be maturing nicely.
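If you want a feel for the observe-act-reward-update loop that toolkits like Unity's ML-Agents automate, here's a tiny self-contained sketch of tabular Q-learning on a made-up corridor world. It's purely illustrative; the environment and numbers are mine and have nothing to do with Unity's actual API:

```python
# Toy Q-learning agent on a 1-D corridor: reach the rightmost cell to get reward.
import random

N_STATES = 6            # corridor cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]      # step left or step right
q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimate per (state, action)

alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy action choice: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if q[state][0] >= q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
        state = next_state

print("learned preference for 'right' in each cell:",
      [round(q[s][1] - q[s][0], 2) for s in range(N_STATES)])
```

ML-Agents does the same kind of trial-and-error training, just with neural networks instead of a table and with the Unity scene acting as the environment.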

  • 2 months ago
  • 5 points

I've come to think that the real advantage that humans have isn't in straight logic, it's in weird pattern matching. I'm a crap chess player because I can only work out the moves sequentially and I don't see the patterns. On the other hand, I'm a magic debugger and if I knew how I did it and could put it into a bottle and sell it, I would have retired as the richest man in the world 40 years ago.

I don't know how this translates to machine learning other than to say that I don't think we can program the really spectacular human stuff, because we still don't know how we do it ourselves.

[comment deleted]
  • 2 months ago
  • 2 points

Well, technically our brains are biological computers. Looking at how computers have advanced from ENIAC in 1946 to today, it is hard to comprehend how big that gap is. Now we have the very early stages of quantum computers actually running, and there is no telling how powerful they might get in another 100 years.

Sure, there is still a large gap where computers cannot reach the same level of intelligence as a human, but that gap has been shrinking. No human can process logic problems (like number crunching) the way a computer can, but other things, like kschendel described, are a real challenge for computers. Is it possible for computers to become that advanced? I think it is possible, but not any time soon.

  • 2 months ago
  • 2 points

I think AI has a long way to go before it can do things without problems. I see it this way: that defense contractor worked for many years to get that robot dog to walk, and now it does pretty well; I just wonder how long the battery lasts lol. Anyway, look at a person: they take years to walk, then take years before they can do things precisely like jump/roll/balance/etc. It's years of learning how every muscle reacts, their own body weight, etc. Who wants a machine to learn for years? By then it may not be needed. It's not a formula or numbers you punch in; there are a ton of variables involved in making a human body do all the stuff athletes can do. Just where are all those self-driving cars we were supposed to have by now, oops. The creative process is likely even more complex. As far as getting a job in that field, I bet it's great, because all the Amazons and Googles don't want to hire people when a computer can do it. Right now they are fine with all the mistakes they make trying to do things with computers (take down all the videos about X, poof, done). Then again, maybe not mistakes lol.

  • 2 months ago
  • 2 points

look at a person: they take years to walk, then take years before they can do things precisely like jump/roll/balance/etc. It's years of learning how every muscle reacts, their own body weight, etc. Who wants a machine to learn for years? By then it may not be needed.

Keep in mind that the learning done by machines such as Boston Dynamics' dog robot Spot (which is now going on sale to select companies) can be replicated instantaneously. It doesn't have to be repeated the way you're describing, and the way humans do it; one Spot does all the learning, and every future Spot catches up to it in an instant. It can also be transferred to similar robots.
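To illustrate what "replicated instantaneously" means in practice: a trained policy is just a bundle of numbers, so copying it to an identical robot is a file copy, not years of re-learning. Here's a generic PyTorch sketch; the layer sizes and file name are hypothetical and this has nothing to do with Boston Dynamics' actual stack:

```python
# Illustrative sketch: export a learned policy from one robot, load it on another.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 12))
# ... imagine months of training happening here ...

# Robot #1 exports everything it learned as a set of weights.
torch.save(policy.state_dict(), "walking_policy.pt")

# Robot #2 (identical hardware) "knows" it the moment the file is loaded.
clone_policy = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 12))
clone_policy.load_state_dict(torch.load("walking_policy.pt"))
```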

Just where are all those self-driving cars we were supposed to have by now, oops.

Contemplate that less than ten years ago, self-driving cars were firmly in the realm of science fiction. Now we can do this. Now imagine this being developed for another ten years from this point, instead of starting from zero.

  • 2 months ago
  • 1 point

Replicated as long as they make the exact same thing: same suppliers, same weight, etc.

https://en.wikipedia.org/wiki/History_of_self-driving_cars

I'm not saying you are wrong, but it's been over 10 years.

  • 2 months ago
  • 2 points

Alright, perhaps I wasn't specific enough; obviously the idea has been around for a long time. But you could not go out and buy a car with autonomous driving features (unless you're going to cite cruise control and the like, lol) until recently. It is now something in the hands of the common person, something you can go "drive" yourself on the roads today, not a DARPA experiment on a closed track or some other highly contrived scenario. Finding a self-driving car on the road next to you wasn't something that just happened normally ten years ago, but it is today.

"We put a car on an electric track that propels it, so it's an autonomous car" like they were goofing around with in the 30s is obviously not even remotely the same concept.

  • 2 months ago
  • 2 points

Udemy and Pluralsight both have AI and ML courses.

  • 2 months ago
  • 1 point

There is always a limit, because a machine is still a man-made product. But as time goes by, that limit widens, because humans learn more and build on the technology that has already been laid down in the past. I still believe, though, that it should aid us instead of doing most of the job for us, like software that makes the work for a particular job slightly more efficient: calculations and similar tasks, that kind of stuff. For entertainment, I wouldn't be interested in seeing movies made by AI, for example, because film is one of the ways we express what it is to be human. I believe you can audit some courses on edX and Coursera. You can pay if you like, so that you earn a certificate you can add to your resume in the future.

[comment deleted]
[comment deleted]
  • 2 months ago
  • 2 points

Nothing I've read (admittedly as a passive fan of ML/AI with no professional training on the matter) indicates that the problem with emulating a human brain using AI techniques is hardware- or power-limited. Indeed, we already have computers that can process calculations radically faster than any human, and vastly more of them at once too. I think you have it precisely backwards: the hardware isn't the limit. It's that we still don't fully understand the human mind, so we don't even know how to begin replicating it in earnest.

[comment deleted]

