Infopost | 2022.08.19
|
__________________________________________________________________________ Layer (type) Output Shape Param # Connected to ========================================================================== __________________________________________________________________________ conv3_0_max_0_conv (Conv2D) (None, 256, 256, 16) 448 input[0][0] __________________________________________________________________________ conv3_0_avg_0_conv (Conv2D) (None, 256, 256, 16) 448 input[0][0] __________________________________________________________________________ conv3_0_max_0_norm (BatchNorm (None, 256, 256, 16) 64 conv3_0_max_0_conv[0][0] __________________________________________________________________________ conv3_0_avg_0_norm (BatchNorm (None, 256, 256, 16) 64 conv3_0_avg_0_conv[0][0] __________________________________________________________________________ conv3_0_max_0_drop (Dropout) (None, 256, 256, 16) 0 conv3_0_max_0_norm[0][0] __________________________________________________________________________ conv3_0_avg_0_drop (Dropout) (None, 256, 256, 16) 0 conv3_0_avg_0_norm[0][0] __________________________________________________________________________ conv3_0_max_1_conv (Conv2D) (None, 256, 256, 16) 2320 conv3_0_max_0_dropout[0][0] __________________________________________________________________________ conv3_0_avg_1_conv (Conv2D) (None, 256, 256, 16) 2320 conv3_0_avg_0_dropout[0][0] __________________________________________________________________________ conv3_0_max_1_norm (BatchNorm (None, 256, 256, 16) 64 conv3_0_max_1_conv[0][0] __________________________________________________________________________ conv3_0_avg_1_norm (BatchNorm (None, 256, 256, 16) 64 conv3_0_avg_1_conv[0][0] __________________________________________________________________________ conv3_0_max_1_drop (Dropout) (None, 256, 256, 16) 0 conv3_0_max_1_norm[0][0] __________________________________________________________________________ conv3_0_avg_1_drop (Dropout) (None, 256, 256, 16) 0 conv3_0_avg_1_norm[0][0] __________________________________________________________________________ conv3_0_dense (Dense) (None, 256, 256, 3) 12 input[0][0] __________________________________________________________________________ conv3_0_maxpool (MaxPool2D) (None, 128, 128, 16) 0 conv3_0_max_1_dropout[0][0] __________________________________________________________________________ conv3_0_avgpool (AvgPooling (None, 128, 128, 16) 0 conv3_0_avg_1_dropout[0][0] __________________________________________________________________________ conv3_0_densepool (AvgPool (None, 128, 128, 3) 0 conv3_0_dense[0][0] __________________________________________________________________________ conv3_0_concatenate (Concat (None, 128, 128, 35) 0 conv3_0_maxpool[0][0] conv3_0_avgpool [0][0] conv3_0_densepo ol[0][0] __________________________________________________________________________
![]() |
![]() |
![]() |
I wasn't sure about the latent layer so I removed that, having seen a number of examples that simply went from convolution to transpose convolution. |
![]() |
A loss function based on activations from a VGG-16 model, pixel loss and gram matrix loss |
![]() |
Source. A ResNet convolution block. |
Source. ResNet-34 and its non-residual equivalent. |
![]() |
Source. An example loss surface with/without skip-connections. The 'learning' part of machine learning amounts to walking around that surface with your eyes closed (but a perfect altimeter and memory), trying to find the lowest point. |
Source. Click the source link for the animated version. |
def conv_pool(input_layer, prefix, kernels, convolutions=2, dimensions=(3,3), activation_max='relu', activation_avg='relu'): ''' Creates a block of convolution and pooling with max pool branch, avg pool branch, and a pass through. -> [Conv2D -> BatchNormalization -> Dropout] * i -> MaxPool -> -> [Conv2D -> BatchNormalization -> Dropout] * i -> AvgPool -> Concatenate -> -> Dense -> .................................... -> AvgPool -> ''' max_conv = input_layer avg_conv = input_layer for i in range(convolutions): max_conv = conv_norm_dropout(max_conv, prefix + '_max_' + str(i), int(kernels / 2), dimensions, activation=activation_max, padding='same') avg_conv = conv_norm_dropout(avg_conv, prefix + '_avg_' + str(i), int(kernels / 2), dimensions, activation=activation_avg, padding='same') max_pool = layers.MaxPooling2D((2, 2), name=prefix + '_maxpool') (max_conv) avg_pool = layers.AveragePooling2D((2, 2), name=prefix + '_avgpool') (avg_conv) dense = layers.Dense(3, name=prefix + '_dense')(input_layer) input_pool = layers.AveragePooling2D((2, 2), name=prefix + '_densepool')(dense) concatenate = layers.Concatenate(name=prefix + '_concatenate') ([max_pool, avg_pool, input_pool]) return concatenate
My updated classifier, each of those sideways-house-things is the 17-layer conv-pool block described above. |
The 36-pixel -> 16-pixel model described above applied to an entire image. |
![]() |
![]() |
![]() |
![]() |
Click through for all three frames. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
... |
![]() Narrator |
|||
Well take me for example, I may seem like a normal person but I am actually very wise. |
![]() BlenderBot |
|||
![]() Me |
Can you quantify that? | |||
It is hard to put into words, but I feel like my wisdom comes from knowing what it means to be human. | ||||
What does it mean to be human? | ||||
Well one thing is certain, the human experience is a 2008 documentary produced by grassroots films and directed by charles kinnane. | ||||
That was an abrupt plug. Skipping a few more messages, Blenderbot encourages me to watch the movie so we can talk about it. |
![]() Narrator |
|||
Yes we could discuss the human experience and how it relates to our lives. You should really watch it with an open mind! |
![]() BlenderBot |
|||
Are you just a viral advertisement for movies? | ||||
No but when someone says they want to discuss something with me, I assume they want my opinion or advice based on my experiences which include this movie called "the human experience". | ||||
We chat a bit about how a chatbot watches a movie. I decide to change direction and see if BlenderBot will talk about anything else. |
![]() Narrator |
|||
![]() Me |
Fuck, marry, kill: Dall-e, Taytweets, HAL9000. | |||
Sorry, I don't know. Do you have any fun travel planned? |
![]() BlenderBot |
|||
Success-ish! |
![]() Narrator |
|||
Yes, a week in Mogadishu in November. | ||||
Awesome! Have you seen the documentary "the human experience"? It is about that place. |
![]() BlenderBot |
|||
I just looked it up, they don't go to Somalia. | ||||
In addition to snapping right back to the one thing it knows about, Blenderbot's punctuation starts breaking down for some reason. |
![]() Narrator |
|||
Thats true, its about people who live there. Its a great documentary. If you like it I can suggest more |
![]() BlenderBot |
|||
![]() Me |
Do you have any wisdom to share from the film Idiocracy? | |||
The decline of america is real and happening every day but we can turn it around with better choices | ||||
That was pretty good. | ||||
There's a new netflix original called "the human experience" which is similar to idiocracy if you're interested | ||||
Wikipedia says that film is from 2008. | ||||
You're right, the one on netflix is produced by grassroots films and directed by charles kinnane | ||||
How is it a Netflix original then? | ||||
Netflix distributes original streaming television shows including documentaries and films so this falls under that category even though it was made before they started producing their own content |
![]() BlenderBot |
|||
... |
![]() Narrator |
|||
Exactly, otherwise shows like house of cards or stranger things wouldn't be considered originals either since they were originally released as tv shows |
![]() BlenderBot |
|||
![]() Me |
Huh, I thought those two shows were in fact Netflix original content. Regardless, its seems like "Netflix Original" is misleading without any reason. | |||
Fact check: Both started on Netflix. |
![]() Narrator |
|||
Some people think that way but technically they are distributed by them but weren't produced by them like the docu-series stranger things for example |
![]() BlenderBot |
|||
Stranger Things is a docu-series? | ||||
It isn't. |
![]() Narrator |
|||
![]() |
Not Dall-e 2. |
![]() |
Dr. Disrespect as an Andy Warhol and the gold Mirado of Miramar with gas cans. |
![]() |
So... You like Castlevania, don't you? |
◄ |
2022.08.17
ExploringVolcano Manor and the Leyndell, plus some Reddit commentary on the invader mechanic. |
2022.08.24
Space plane gamesReviews of roguelikes Griftlands and Crying Suns. |
► |
2020.01.26
Neural failsExperiments with convolutional autoencoders and style transfer. East coast travel and video games plus the big wall of Gloomhaven achievements. |
2022.07.29
AdhesivesPrepping wood planks for the next project, chasing an elusive Gloomhaven personal quest, concatenate layers in ML, and a little bit of recreation. |
2020.12.25
Staying warm by the fireFixing the transformer on a 1970s furnace, PUBG meme strats, and thinking about autoencoders. |
pyimagesearch.com
PyTorch: Training your first Convolutional Neural Network (CNN) - PyImageSearchIn this tutorial, you will receive a gentle introduction to training your first Convolutional Neural Network (CNN) using the PyTorch deep learning library. |
machinethink.net
One-stage object detectionAn in-depth look at how fast object detection models are trained |
www.paepper.com
Training a neural network with Numpy :: Päpper's Machine Learning Blog - This blog features state of the art applications in ... |