Posts

Showing posts from December 8, 2018

Single hidden layer with finite #neurons limitations

I need to prove that an MNN with one hidden layer and a finite number of neurons does not have compact support, i.e. that the integral of the norm of f (the network function) over all of R^d is infinite. It is possible to show that a finite Taylor series does not have compact support and that such a finite network can be represented by a finite Taylor series. The question is whether a simpler proof exists. Tags: approximation-theory, neural-networks
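
A minimal formalization of the claim, using the usual notation for a one-hidden-layer network; the symbols N, c_i, w_i, b_i and the activation sigma are my own naming, not taken from the post. The network function is

    \[
      f(x) \;=\; \sum_{i=1}^{N} c_i \,\sigma\!\bigl(w_i^{\top} x + b_i\bigr),
      \qquad x \in \mathbb{R}^d,
    \]

and the statement to prove, as the poster phrases it, is

    \[
      \int_{\mathbb{R}^d} \lvert f(x)\rvert \, dx \;=\; \infty ,
    \]

which in particular rules out compact support (leaving aside the trivial case f \equiv 0).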

Where are the extra coins?

I am the manager of a coin casting foundry. We produce perfectly round coins with some (fixed) thickness and a diameter of exactly 1 inch. The working room is well secured: if any coin tries to leave the room, the alarm goes off. To allow workers to safely carry coins out of the room, a special container whose base is a 10" × 10" square is available. At most one layer of coins may be spread in the container, and when it passes through the door the alarm won't go off. Normally, up to 100 coins can be carried with one such container. Yesterday we discovered a theft. A container (with coins) was taken out of the working room without triggering the alarm. What confuses me is that 106 coins were lost. I don't know how the thief took the extra 6 coins without setting off the alarm. Can you...
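
For the arithmetic behind the "100 coins" baseline and how offset (hexagonal) rows already beat it, here is a short Python sketch. The layout it counts is an assumption of mine, not the puzzle's intended answer, and it only reaches 105 coins rather than the 106 mentioned above.

    import math

    # Offset-row (hexagonal) packing of unit-diameter coins in a 10" x 10" tray.
    # Coin centres must stay at least 0.5" from every wall, so they live in a
    # 9" x 9" square; adjacent offset rows are sqrt(3)/2 inches apart.

    SIDE = 10.0                  # inner side of the container, inches
    RADIUS = 0.5                 # coin radius, inches
    ROW_GAP = math.sqrt(3) / 2   # vertical distance between offset rows

    usable = SIDE - 2 * RADIUS           # 9": span available to the centres
    n_rows = int(usable / ROW_GAP) + 1   # rows that fit vertically

    # Rows alternate between 10 coins (centres at 0.5, 1.5, ..., 9.5) and
    # 9 coins shifted sideways by half a diameter.
    coins = sum(10 if r % 2 == 0 else 9 for r in range(n_rows))

    print(f"rows: {n_rows}, coins in a plain hexagonal layout: {coins}")
    # -> rows: 11, coins in a plain hexagonal layout: 105

The square grid gives 10 rows of 10 = 100 coins; spacing the rows sqrt(3)/2 ≈ 0.866" apart instead of 1" leaves room for an 11th row, at the cost of every other row holding only 9 coins. Going from 105 to 106 requires a less regular mixture of square and hexagonal rows, which this sketch does not attempt.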