

You are now blind and deaf like Helen Keller. Good luck, she figured it out, you’ll be fine.


No, you are correct. Hinton's group began researching ReLUs in 2010, and his students Alex Krizhevsky and Ilya Sutskever used them to train a much deeper network (AlexNet) that won the 2012 ILSVRC. The reason AlexNet was so groundbreaking was that it brought everything into a single approach: the training improvements of the day (SGD with momentum, plus dropout for regularization), a better activation function (ReLU), a deeper network (8 layers), supervised training on a very large dataset (necessary to learn good general-purpose convolutional kernels), and GPU acceleration.
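For anyone who hasn't seen it, ReLU is just f(x) = max(0, x). A quick self-contained sketch (illustrative numbers only, nothing from the AlexNet paper) of why it trains deep nets better than a saturating activation like the sigmoid:

    # ReLU keeps a gradient of 1 for any positive input, while the sigmoid's
    # gradient shrinks toward 0 for large |x|, which compounds into vanishing
    # gradients across many layers.
    import math

    def relu_grad(x):
        return 1.0 if x > 0 else 0.0

    def sigmoid_grad(x):
        s = 1.0 / (1.0 + math.exp(-x))
        return s * (1.0 - s)

    for x in (0.5, 2.0, 5.0):
        print(f"x={x}: relu_grad={relu_grad(x):.3f}, sigmoid_grad={sigmoid_grad(x):.4f}")
    # at x=5.0 the sigmoid gradient is already ~0.0066 vs ReLU's 1.0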
NNs, and specifically CNNs, won out because they were able to learn more expressive image feature representations than the hand-crafted features of competing algorithms. The proof was in the vastly better performance: it was a major jump at a time when results on the ILSVRC were becoming saturated. Nobody was making anything close to +10% improvements on that challenge back then; it blew everybody out of the water and made NNs and deep learning impossible to ignore.
Edit: to accentuate the point about datasets and GPUs, the original AlexNet developers really struggled to train their model on the GPUs available at the time. The model was too big, so they had to split it across two GPUs to make it work; they were some of the first researchers to train large CNNs on GPUs at all. Without large datasets like the ILSVRC they would not have been able to learn good deep hierarchical convolutions, and without better GPUs they wouldn't have been able to make AlexNet sufficiently large or deep. Training AlexNet on CPUs alone for the ILSVRC was out of the question; it would have taken months of full-tilt, nonstop compute for a single training run. It took more than these two things, as detailed above, but removing those two barriers is what really allowed CNNs and deep learning to take off. Much of the underlying NN and optimization theory had been around for decades.
Before AlexNet, SVMs were the best algorithms around. LeNet was the only comparable success case for NNs back then, and it was largely seen as limited to MNIST digits because deep networks were too hard to train. People used HOG+SVM, SIFT, SURF, ORB, older Haar / Viola-Jones features, template matching, random forests, Hough transforms, sliding windows, deformable parts models… so many techniques that were made obsolete once the first deep networks became viable.
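To make the "hand-crafted features" point concrete, this is roughly what a pre-2012 pedestrian detector looked like using OpenCV's stock HOG descriptor and its bundled linear SVM (a sketch; the image path is a placeholder):

    # Classic HOG + linear SVM sliding-window detector (opencv-python / cv2)
    import cv2

    img = cv2.imread("street_scene.jpg")    # placeholder input image
    hog = cv2.HOGDescriptor()               # hand-crafted gradient-histogram features
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    # Multi-scale sliding-window detection -- exactly the kind of pipeline
    # that deep CNN detectors made obsolete.
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", img)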
The problem is your schooling was correct at the time, but the march of research progress eventually saw 1) the creation of large, million-scale supervised datasets (ImageNet) and 2) larger / faster GPUs with more on-card memory.
It was accepted as fact back in ~2010 that SVMs were superior to NNs in nearly every aspect.
Source: started a PhD on computer vision in 2012
Well, if you ever find yourself in Portland, OR, I'll buy you a beer. Nice of you to do that without any credit. Truly the lord's work.
For those who haven’t read it:
Jazz hands, bitches!
That’s all you get.
Everybody in this thread needs to read Project Hail Mary by Andy Weir.
Exactly how would carbon dioxide get exchanged if the lungs are damaged?
It’ll 100% be chickcoal since the hand will be pushing Mach 5. Pretty sure the plasma will give it a nice sear.


Relax, show a willingness to learn and you’ll be ok.
I got my start working for university IT and made it all the way to a CS Ph.D. and into industry.
Edit: and get good sleep! It’s nearly midnight on the West coast, get as much good quality sleep as you can.
Why stop there? Here is a much better picture for a primal fear response: https://www.nbcnews.com/id/wbna27426933
It takes about 8 minutes for light to travel from the Sun to Earth. Because nothing, including information and the effects of gravity, travels faster than light in a vacuum, we would not and could not know it had disappeared for those 8 minutes. This means Earth would continue to follow its orbit around the non-existent Sun for 8 minutes, because the Sun's gravity would still be acting on the Earth.
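Quick back-of-the-envelope on the 8-minute figure, in case anyone wants to check it:

    # Light travel time from the Sun to Earth
    AU = 1.496e11        # mean Sun-Earth distance in meters
    c = 2.998e8          # speed of light in m/s
    print(AU / c / 60)   # ~8.3 minutes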
If it was nighttime, you wouldn't notice the sudden lack of sunlight (except that the moon, which only shines by reflected sunlight, would wink out too), but you'd almost certainly notice the change in gravity.
Edit: actually, you wouldn't feel any difference in gravity or experience any change of acceleration, because you and the Earth are both in free fall around the Sun and would simply fly off together in a straight line. What you would experience is a very tiny vibration: the 1 million push notifications being sent to your phone from the other side of the planet.


My wife is a stay-at-home parent, and she works way harder than I do on a daily basis. Whoever thinks parenting isn't a full-time job clearly has never had kids… or is full of shit if they have had kids.


<cough cough> single payer <cough cough>
Reinforcement learning is a machine learning (ML) technique ("AI" in layman's terms) in which a model, often a neural network or another non-linear model, learns by trial and error to maximize a reward signal rather than learning from labeled examples.
As far as ML math goes, this is fairly tame. It looks complicated, but it is spelled out clearly in the paper. A lot of these kinds of theoretical papers (the sort that would get published in Automatica) are going to lean very heavily on math.
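For anyone curious what the core of RL looks like in its simplest textbook form, here's a rough sketch of a tabular Q-learning update (a generic illustration, not the method from the paper being discussed):

    # Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    from collections import defaultdict
    import random

    alpha, gamma, epsilon = 0.1, 0.99, 0.1   # step size, discount, exploration rate
    actions = [0, 1]                         # toy action space
    Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

    def choose_action(state):
        # epsilon-greedy: mostly exploit the current estimate, occasionally explore
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in actions)
        td_error = reward + gamma * best_next - Q[(state, action)]
        Q[(state, action)] += alpha * td_error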
Source: PhD in Computer Science with dissertation using neural networks.
It’s a lot of fucking work. If you enjoy hard work, learning about the latest advancements in your field, and can handle disappointment / criticism well, then it’s something to look into.


Good tip. You’d think that I would get lucky every once in a while.


I feel like I’m cursed or McD’s is taking a huge nose dive. I haven’t had a good, hot, not-soggy, salty McDonalds french fry in like 2-3 years. Every time I go, the fries are super gross. I’ve taken several road trips and the only consistency I’ve experienced is how terrible the fries have been.
Who’s to say this isn’t for animal health research?
They could be manufacturing prototypes or examples for apprentices. It's also possible replicas were made for trade, to demonstrate the shape to others, as tokens of appreciation, or to be adapted into toys / primitive dice. <shrug>