Storypost | 2020.11.29

Neural style transfer Valentino Rossi Ducati Laguna Seca Ralph Steadman tiling
Thanksgiving

Thanksgiving lunch The Torrey Pines Lodge turkey wine

Since it was just the two-ish of us, Jes and I went to the Lodge for Thanksgiving lunch.

Pegboard backpack organizer

Clearing out the upstairs office meant there was some spare pegboard floating around. I replaced a crowded set of coat hooks with the more functional but less attractive paneling.

Olympus Has Fallen HBO dog weimaraner media room

After several instances of being sent the wrong item (while in a time crunch) and rather disappointing streaming availability, I kicked Prime to the curb. It's covid, so I replaced vanilla Prime HBO with HBO Max.
The wheel


Wheel trading (bouncing between cash-secured puts and covered calls) has been a fun hobby with better profit potential than GME YOLOs. The story of the above:
PUBG un-RIP(?)


A few months back I eulogized PUBG. Turns out, covid combined with a lack of attractive alternatives brought the squad back. Not much has changed in the game; we've learned to live with the bots and appreciate its marginally better stability.
Neural style transfer, but TensorFlow this time

I meandered back into neural style transfer over football. I'd last left it with my DL4J experimentation, which notably didn't have hardware acceleration. Since the Keras sample code was easy to hit 'go' on, I tried that.

Neural style transfer algorithm Valentino Rossi Ducati Laguna Seca Ralph Steadman content style

For this exercise I went with:

Same same


Naturally, a 1060 is much quicker than a Core i7. But I was treated to largely the same results as before.




The algorithm tends toward wavy lines in areas of low detail and seems to produce similar images regardless of style. Unlike the DL4J code that required VGG19-size (224x224) images, this one does scaling, for better or worse.

Scaling up

It didn't take long to modify the example to do tiles of a full-res image. The tile boundaries are obvious and could be fixed inelegantly with Photoshop and elegantly with feathering and staggering. I also found a blog post (whose URL I have since lost) that recommended a few things:
Deep learning neural style transfer algorithm sample Maison Pour Erotomane example

As long as I was tiling the content/output, it made sense to apply another lesson and sample various portions of the style image. That is, the naive method is to take two 224x224 images and combine them, so you have to crop/scale both content and style images to a small square. Scaling down means that whatever style you have quickly becomes lost - e.g. a 10x13 brushstroke may be condensed to 2x3. Cropping the style image means you're only looking at a portion of the art - so your Maison Pour Erotomane may be all car and no horse.

So my next revision applied a random 224x224 style square to each tile of the content.
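
A minimal sketch of that sampling step (PIL-based; the file path is hypothetical, and the real loop lives in the full listing at the end of this section):

import random
from PIL import Image

SIDE = 224  # VGG19 input size

def random_style_crop(style_img, side=SIDE):
    '''Return a random side x side crop of a (larger) style image.'''
    x = random.randrange(style_img.width - side + 1)
    y = random.randrange(style_img.height - side + 1)
    return style_img.crop((x, y, x + side, y + side))

style = Image.open('style/steadman.png')  # hypothetical style file
patch = random_style_crop(style)          # one 224x224 style sample per content tile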

Implementing an outer loop meant working out something that I hadn't yet done - loading incremental output. I think I accomplished this by initializing the initial guess argument of fmin_l_bfgs_b() to the last output rather than the content. The algorithm still computes content/style loss from the original images, but can now be checkpointed.
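
In code, the warm start is roughly this (a condensed sketch using helpers from the full listing below - preprocess_image, load_img, and the evaluator):

# Seed the optimizer with the previous run's output rather than the content,
# so successive runs pick up where the last one left off.
if os.path.isfile('last.png'):
    x0 = preprocess_image(load_img('last.png'))      # resume from checkpointed output
else:
    x0 = preprocess_image(load_img('content.png'))   # first run: start from the content

x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x0.flatten(),
                                 fprime=evaluator.grads, maxfun=20)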


Each progressive iteration shows more and more artistic stylization that quickly becomes pretty abstract. You also see the hard tile boundaries soften as I introduced a small random translation to each tile's position on every pass.
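
The jitter itself is just a small random offset applied to every tile origin on each pass (a sketch; see x_jitter/y_jitter in the listing below):

# Shift the whole tile grid by a small random offset each pass so the seams
# from successive passes don't stack in the same place.
x_jitter = random.randrange(-16, 16)
y_jitter = random.randrange(-16, 16)

x_start = max(0, x_tile * x_step + x_jitter)
y_start = max(0, y_tile * y_step + y_jitter)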

You can also see the entire image bounce between styles as the random sample from the Steadman image changes from one section to the next. This, of course, means progressive iterations move between 'substyles' of the style image. Numerically, it means trying to find the global minimum on a moving target. From a graphic art perspective, this creates a set of image variations that can be manually or automatically blended into a final product.

Style match

Neural style transfer Rossi Ducati Laguna Seca Ralph Steadman tiling style match

The thought occurred to me that some tiles in the style image might be more appropriate for a given content tile. Heuristics come to mind, e.g. selecting a tile from the style image based on its fitness for the content: matching color, matching contrast, etc. Ultimately though, it seemed like the easiest and best(?) approach would be to let loss make that determination. The optimization of each tile would sample a number of sources and only retain the best result.
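
Conceptually, the selection is just a keep-the-minimum loop over random style crops (a sketch; run_style_transfer, content_tile, and tile_origin are illustrative stand-ins for what the listing below does inline):

# Try several random style crops for one content tile and keep only the
# result with the lowest final loss.
best_loss, best_result = None, None
for _ in range(samples_per_tile):
    style_tile = random_style_tile()                             # random crop of a style image
    result, loss = run_style_transfer(content_tile, style_tile)  # hypothetical helper
    if best_loss is None or loss < best_loss:
        best_loss, best_result = loss, result
output_image.paste(best_result, tile_origin)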

Neural style transfer Ralph Steadman tiling Hunter Thompson Ducati

In the spirit of increasing the number of 'style matches' for a given square of content, I added a couple more Steadmans to the random sampling. The output looked like this:

   Processing tile: ( 0 ,  0 ) at ( 16 ,  0 )
      Style source  0
      Loss:  203719060.0
      Using result
      Style source  1
      Loss:  200921020.0
      Using result
      Style source  2
      Loss:  164412600.0
      Using result
      Style source  3
      Loss:  197183180.0
      Style source  4
      Loss:  155275980.0
      Using result
   Processing tile: ( 0 ,  1 ) at ( 16 ,  238 )
      Style source  0
      Loss:  249554060.0
      Using result
      Style source  1
      Loss:  214680510.0
      Using result
      Style source  2
      Loss:  179050400.0
      Using result
      Style source  3
      Loss:  198226380.0
      Style source  4
      Loss:  183831140.0

Neural style transfer Valentino Rossi Ducati Laguna Seca Ralph Steadman tiling

And like that, football was over. I ran the code enough to see a different-but-consistent style applied to my image set. The output is ultimately beholden to iteration count and content/style weight hyperparameters.

Deep learning neural style transfer algorithm Broncos content style

Deep learning neural style transfer algorithm Broncos Hinton handoff tiling

The code, as it is right now:

'''Neural style transfer with Keras.  Modified as an experiment.
# References
    - [A Neural Algorithm of Artistic Style](http://arxiv.org/abs/1508.06576)
'''

from __future__ import print_function
from keras.preprocessing.image import load_img, save_img, img_to_array, array_to_img
import numpy as np
from scipy.optimize import fmin_l_bfgs_b
import time
import random
import glob
import argparse
import os.path
from os import path

from keras.applications import vgg19
from keras import backend as K


def preprocess_image(img):
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg19.preprocess_input(img)
    return img    


def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, side_length, side_length))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((side_length, side_length, 3))
    # Remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR'->'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x


def gram_matrix(x):
    assert K.ndim(x) == 3
    if K.image_data_format() == 'channels_first':
        features = K.batch_flatten(x)
    else:
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram

# the "style loss" is designed to maintain
# the style of the reference image in the generated image.
# It is based on the gram matrices (which capture style) of
# feature maps from the style reference image
# and from the generated image


def style_loss(style, combination):
    assert K.ndim(style) == 3
    assert K.ndim(combination) == 3
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = side_length * side_length
    return K.sum(K.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))


# an auxiliary loss function
# designed to maintain the "content" of the
# base image in the generated image

def content_loss(base, combination):
    return K.sum(K.square(combination - base))


# edge detector - sum edges will be how busy it will look
def total_variation_loss(x):
    assert K.ndim(x) == 4
    if K.image_data_format() == 'channels_first':
        a = K.square(
            x[:, :, :side_length - 1, :side_length - 1] - x[:, :, 1:, :side_length - 1])
        b = K.square(
            x[:, :, :side_length - 1, :side_length - 1] - x[:, :, :side_length - 1, 1:])
    else:
        a = K.square(
            x[:, :side_length - 1, :side_length - 1, :] - x[:, 1:, :side_length - 1, :])
        b = K.square(
            x[:, :side_length - 1, :side_length - 1, :] - x[:, :side_length - 1, 1:, :])

    return K.sum(K.pow(a + b, 1.25))

def fidelity_loss(x, y):
    assert K.ndim(x) == 3
    assert K.ndim(y) == 3
    if K.image_data_format() == 'channels_first':
        x_g = K.sum(x[:3, :, :])
        y_g = K.sum(y[:3, :, :])
        return K.square(x_g - y_g)
    else:
        x_g = K.sum(x[:, :, :3])
        y_g = K.sum(y[:, :, :3])
        return K.square(x_g - y_g)

    # Experiment with luminance
    #if K.image_data_format() == 'channels_first':
    #    x_g = np.dot(x[0, :3, :, :], [0.2989, 0.5870, 0.1140])
    #    y_g = np.dot(y[0, :3, :, :], [0.2989, 0.5870, 0.1140])
    #    return K.square(x_g - y_g)
    #else:
    #    x_g = np.dot(x[0, :, :, :3], [0.2989, 0.5870, 0.1140])
    #    y_g = np.dot(y[0, :, :, :3], [0.2989, 0.5870, 0.1140])
    #    return K.square(x_g - y_g)


# Returns style layers - this is the default (all), I experimented with
# dropping random ones
def get_random_style_layers():
    return ['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1',
            'block5_conv1']


def get_random_crop(image, width, height):
    '''Returns a random subimage with the given width and height.'''    
    
    # Crop to width and height, if specified.
    x = 0
    if width is not None and width != image.width:
        x = int(random.randrange(image.width - width))
    y = 0
    if height is not None and height != image.height:
        y = int(random.randrange(image.height - height))

    if x != 0 or y != 0:
        box = (x, y, x + width, y + height)
        image = image.crop(box)
    return image

def eval_loss_and_grads(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((1, 3, side_length, side_length))
    else:
        x = x.reshape((1, side_length, side_length, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values

def random_style_tile():
    image = style_images[random.randrange(len(style_images))]
    return get_random_crop(image, side_length, side_length)


class Evaluator(object):

    def __init__(self):
        self.loss_value = None
        self.grads_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

side_length = 224           # VGG19 is 224x224x3

# Iteration hyperparameters (modify these)
iterations_per_image = 10   # Number of image traversals
samples_per_tile = 3        # Number of style tiles to try per iteration
iterations_per_sample = 10  # Number of style transfer iterations per sample

# Use all png files from the style subdirectory, random 224 squares will be used
# from these files to perform style transfer - so image size should be
# approximately the dimensions of the content.
style_files = glob.glob('style/*.png')

file_id = random.randrange(696969)

# Load content.png, the size of this image will significantly impact run time.
content_image = load_img('content.png')
content_width, content_height = content_image.size

# Load style images.
style_images = []
for style_name in style_files:
    style_image = load_img(style_name)
    style_images.append(style_image)

# If this setup was run previously, use the existing output image.
if (os.path.isfile('last.png')):
    output_image = load_img('last.png')
else:
    output_image = load_img('content.png')

# Compute the tile count/step size.  There will be overlap and it should
# be a good thing.
x_tiles = int(content_width / side_length)
if (content_width % side_length != 0):
    x_tiles += 1
x_step = int(content_width / x_tiles)

y_tiles = int(content_height / side_length)
if (content_height % side_length != 0):
    y_tiles += 1
y_step = int(content_height / y_tiles)

feature_layers = get_random_style_layers()
print('Style layers: ' + str(feature_layers))

# Number of times to cover the entire image (optimize each tile)
for image_iteration in range(iterations_per_image):
    print('Iteration: ', image_iteration)

    # Randomize hyperparameters because I don't know what good values are.
    # Modify these/make them fixed.
    total_variation_weight = random.uniform(0.001, 0.5)
    style_weight = random.uniform(0.001, 0.5)
    content_weight = random.uniform(0.001, 0.5)
    fidelity_weight = random.uniform(0.001, 0.5)

    # Bump the tile a random value to do a smoother stitch.
    x_jitter = int(random.randrange(-16, 16))
    y_jitter = int(random.randrange(-16, 16))

    # Iterate over each image tile.
    for x_tile in range(x_tiles):
        for y_tile in range (y_tiles):
            x_start = (x_tile * x_step) + x_jitter
            y_start = (y_tile * y_step) + y_jitter

            # Post-jitter boundary check.
            if (x_start < 0):
                x_start = 0
            if (y_start < 0):
                y_start = 0
            if (x_start + side_length > content_width):
                x_start = content_width - side_length - random.randrange(1, 16)
            if (y_start + side_length > content_height):
                y_start = content_height - side_length - random.randrange(1, 16)

            print('   Processing tile: (', x_tile, ', ', str(y_tile), ') at (', x_start, ', ', y_start, ')')

            box = (x_start, y_start, x_start + side_length, y_start + side_length)
            tile_content = content_image.crop(box)
            base_image = K.variable(preprocess_image(tile_content))

            best_loss = -1

            # For each tile, sample random portions of the style image(s)
            # and choose the best of the lot.
            for i in range(samples_per_tile):
                print('      Trying tile ', i)
                tile_style = random_style_tile()
                
                evaluator = Evaluator()
                tile_output = output_image.crop(box)

                # run scipy-based optimization (L-BFGS) over the pixels of the
                # generated image so as to minimize the neural style loss
                x = preprocess_image(tile_output)

                style_reference_image = K.variable(preprocess_image(tile_style))

                if K.image_data_format() == 'channels_first':
                    combination_image = K.placeholder((1, 3, side_length, side_length))
                else:
                    combination_image = K.placeholder((1, side_length, side_length, 3))
                
                # combine the 3 images into a single Keras tensor
                input_tensor = K.concatenate([base_image,
                                              style_reference_image,
                                              combination_image], axis=0)

                # Reinitialize VGG19.  There's probably a way to do this once
                # and improve performance.
                model = vgg19.VGG19(input_tensor=input_tensor,
                                    weights='imagenet', include_top=False)

                outputs_dict = dict([(layer.name, layer.output)
                                     for layer in model.layers])

                # combine these loss functions into a single scalar
                loss = K.variable(0.0)
                layer_features = outputs_dict['block5_conv2']
                base_image_features = layer_features[0, :, :, :]
                combination_features = layer_features[2, :, :, :]
                loss = loss + content_weight * content_loss(base_image_features, combination_features)

                for layer_name in feature_layers:
                    layer_features = outputs_dict[layer_name]
                    style_reference_features = layer_features[1, :, :, :]
                    combination_features = layer_features[2, :, :, :]
                    loss = loss + (style_weight / len(feature_layers)) * \
                        style_loss(style_reference_features, combination_features)
                loss = loss + total_variation_weight * total_variation_loss(combination_image)

                loss = loss + fidelity_weight * fidelity_loss(combination_features, base_image_features)

                # get the gradients of the generated image wrt the loss
                grads = K.gradients(loss, combination_image)

                outputs = [loss]
                if isinstance(grads, (list, tuple)):
                    outputs += grads
                else:
                    outputs.append(grads)

                f_outputs = K.function([combination_image], outputs)

                # With this content/style combo, use the style transfer algorithm.
                for i in range(iterations_per_sample):
                    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                                     fprime=evaluator.grads, maxfun=20)
                    print('        Loss[', i, ']: ', min_val)

                # If this is the first style sample or it's better than the last,
                # use this output.
                if (best_loss == -1 or min_val < best_loss):
                    print('      Updating from prev: ', min_val, ' < ', best_loss)
                    img = array_to_img(deprocess_image(x.copy()))
                    output_image.paste(img, (x_start, y_start))
                    best_loss = min_val

                # Reset the back end
                K.clear_session()
                        
    fname = 'output_' + str(file_id) + '_%d.png' % image_iteration
    print('   ->', fname)
    save_img(fname, output_image)
    save_img('last.png', output_image)

Fantasy

Deep learning neural style transfer algorithm Broncos Hinton handoff

25-8 on aggregate going into this wild week of Jacksonless Ravens, QBless Donkeys, and a Thursday-Tuesday weekend.

Week | d'san andreas da bears (Medieval Gridiron) | Covid-20 (Password is Taco) | Dominicas (Siren)
1 | Danville Isotopes, 110.8 - 72.5 W (1-0) | Black Cat Cowboys, 155.66 - 78.36 W (1-0) | TeamNeverSkipLegDay, 136.24 - 107.50 W (1-0)
2 | Screaming Goat Battering Rams, 119.9 - 105.9 W (2-0) | [Random UTF characters resembling an EQ], 115.50 - 115.74 L (1-1) | Dem' Arby's Boyz, 94.28 - 102.02 L (1-1)
3 | Nogales Chicle, 106.5 - 117.8 L (2-1) | Circle the Wagons, 100.42 - 90.02 W (2-1) | JoeExotic'sPrisonOil, 127.90 - 69.70 W (2-1)
4 | Britons Longbowmen, 122.9 - 105.1 W (3-1) | Staying at Mahomes, 123.28 - 72.90 W (3-1) | Daaaaaaaang, 138.10 - 108.00 W (3-1)
5 | Toronto Tanto, 105.0 - 108.2 L (3-2) | Robocop's Posse, 111.32 - 134.26 L (3-2) | Alpha Males, 86.20 - 76.12 W (4-1)
6 | Only Those Who Stand, 108.2 - 66.7 W (4-2) | KickAssGreenNinja, 65.10 - 84.02 L (3-3) | SlideCode #Jab, 71.60 - 53.32 W (5-1)
7 | San Francisco Seduction, 121.7 - 126.4 L (4-3) | Ma ma ma my Corona, 118.22 - 84.20 W (4-3) | G's Unit, 109.20 - 92.46 W (6-1)
8 | LA Boiling Hot Tar, 116.2 - 59.4 W (5-3) | Kamaravirus, 118.34 - 109.94 W (5-3) | WeaponX, 113.14 - 85.40 W (7-1)
9 | SD The Rapier, 135.0 - 90.8 W (6-3) | C. UNONEUVE, 117.80 - 90.16 W (6-3) | Chu Fast Chu Furious, 128.28 - 59.06 W (8-1)
10 | West Grove Wankers, 72.9 - 122.8 L (6-4) | Pug Runners, 98.90 - 77.46 W (7-3) | NY Giants LARP, 75.24 - 75.06 W (9-1)
11 | SF Lokovirus, 127.9 - 87.1 W (7-4) | Bravo Zulus, 116.34 - 45.50 W (8-3) | HitMeBradyOneMoTime, 107.42 - 89.22 W (10-1)
12 | Danville Isotopes, 154.7 - 64.1* | Forget the Titans, 57.04 - 99.74* | TeamNeverSkipLegDay, 122.68 - 91.12*

Dog weimaraner label maker labeling indignant



Storypost | 2020.11.15

Borderlands 3 psycho Krieg DLC artistic bullet head
Among Us

Video games steam Among Us

Among Us memes are on the decline, but they got a little bump with the votingest election in US history. The lolbaters haven't played it in a couple of months, but we did have a few rousing sessions of griefing and dysfunction.
Decision 2020

Meme Trump ejected

Between the polarization, the pandemic, the contested election, and the personalities involved, the pillars of democracy have been slathered in a thick coat of meta. It's amusing, anyway:

Election 2020 Dan Price tweet covid experience Election 2020 meme copium Trump Election 2020 FEMA meme
Election 2020 Pfizer stock Four Seasons Total Landscaping tweet hungry cap wolf Election 2020 meme Kanye president Election 2020 Ben Garrison planndemic comic Election 2020 meme Trump changes name to Joe Biden CNN
Election 2020 surfing meme sponger Election 2020 meme Trump the art of the L Election 2020 comic statue of liberty mask slingshot Trump

Even Kevin came in with a dad joke:

kevin
Why can't Trump stay in the White House?
Because it is For Biden!
Indecision 2020

2020 election I voted

So in September we talked about the signals that were being put up for an election that wouldn't be decided November 3rd. A couple weeks later, was it alarmism or are we basically Belarus? Let's look at each point of the strategy:

"Convince people that the electoral system and/or certain types of votes are untrustworthy."

Contested election Trump tweet most secure rigged election Pam Fessler

Dead people voting, thrown out ballots, the illegitimacy of ballots *counted* (not cast) after election day. The talking points started months ago and have been repeated throughout the process.

"Send armed operatives appearing as official as possible to polling places to confront voters."

Contested election armed news Charlotte NC

There wasn't much voter intimidation and it was dealt with swiftly.

Contested election article 2020 Philadelphia vote counting

In similarly small measure, there were Brooks Brothers Riot reduxes.

"Audit ballots. Find those hanging chads."

Contested election 2020 Arizona open carry

Post-election litigation has shone a light on the fact that sanctioned poll watching (observation of the counting) has been an important part of this election. There hasn't been much buzz about mail-in ballots being tossed on technicalities, but 'Sharpiegate' has been the controversy of Decision 2020.

"Contest results, demand recounts, take legal action."

Contested election down ballots

The Trump campaign has contested the results in all of the battleground states - Michigan, Arizona, Nevada, Pennsylvania, and Georgia. The states and judiciary seem to have been prepared and dealt with the first round of litigation swiftly.

TBD

Contested election faithless electors Mark Levin tweet

So, are we Belarus?

I won this election by a lot 2020 Trump tweet

If it's done its job, EDO has tainted every battleground such that it can be litigated in court and in public opinion.

The post-November 03 script seems to have followed the one put forth by The Atlantic a couple months ago, but with substantially less vigor than you would expect from a group that makes itself so visible.

Zerohedge comments contested election

As I sometimes do, I dipped into the pond scum of zh comments but was rewarded with some mutual commiseration. It seems like the blueprint was there, but was not executed on the scale required to overturn the electoral system.

Election 2020 Trump protests flag

That's not to say that every lawsuit is dead and there aren't believers in Sharpiegate and the like. But with even Fox News calling the election done, it's hard to imagine state legislatures going the faithless elector route.
Nioh fin

Nioh final castle video game

J and I wrapped up our Nioh journey. There were allies and enemies, samurai and monsters, community player ghosts and spirit animals.

Nioh cat kill me to destroy them all Nioh final boss dragon house Nioh burning house Nioh fire bird
Nioh frog boss Nioh spirit Nioh ice lady Nioh musket soldiers Nioh skeleton monster boss
Nioh trip to the spa Nioh final boss house dragons Nioh swamp Nioh boss area many graves
BL3 Director's Cut + Psycho Krieg and the Fantastic Fustercluck

Borderlands 3 Krieg DLC psycho mask giant

The whole BL2 crew has finally shown up, with the Krieg DLC and the Axton/Sal-hosted Arms Race game show/arena mode.

Arms Race

Borderlands 3 roguelike

Arms Race crosses roguelike elements with a PvE battle royale in the style of Running Man or Moxxi's Underdome. Players start with nothing and work their way up from white-tier weapons by roaming the map and looting. A playthrough ends with - in Borderlands style - a boss battle that is understandably easier with better loot. Unlike most Borderlands gameplay, the map is surrounded by a shrinking BR-style bubble that eventually converges on the boss.

Coming from a max level, Mayhem 10 playthrough, Arms Race is a refreshing return to weapons-based play where you don't need a minmaxed build to drop even the squishiest of enemies.

Borderlands 3 roguelike

The map features a handful of loot caches and enemy clusters that are probably fun to check out for the first time. With a fairly static map, the game mode seems to primarily offer some gameplay novelty as well as a source for (unique?) loot. Afaik there's no escalating difficulty or mapwide modifiers, making it a little less like a Moxxi DLC and more like a large arena.

Of course, seeing Sal and Axton play game show commentators is kind of fun and features some quality BL one-liners:

Like I told my ex-wife, if you'd been paying attention, it wouldn't have been a surprise.

Fantastic Fustercluck

Borderlands 3 Krieg DLC giant screenshot Enter the Psychoscape

In a story reminiscent of the Pre-Sequel's Claptastic Voyage, the Krieg DLC dives into the mind of the troubled vault hunter. The exploration of the subconscious allows for some trippy visual effects and nerdy references. As usual, the DLC isn't light on plot and character development - this one just has a lot more Krieg-isms.

Borderlands 3 Evil Lilith energy balls Borderlands 3 Psycho Krieg Fustercluck MC Escher room
Borderlands 3 Evil Lilith throne Borderlands 3 good and evil Krieg Dont Call it a Rorschach Borderlands 3 Psycho Krieg bandit mirror
Borderlands 3 sniping gun loader Borderlands 3 Psycho Krieg Locomobius Borderlands 3 Psycho Krieg nowhere loader Maya Borderlands 3 Psycho Krieg vault
RoR meme runs

Risk of Rain 2 Alloy Worship Unit Artificer

While we haven't put together a god run on normal difficulty, we're having fun with the occasional drizzle sequence run. This opens the door for extra-hard Mithrix battles and - at last - we got a N'Kuhana's Retort.

Risk of Rain 2 purple bubble Risk of Rain 2 Mithrix many many beetle guards Risk of Rain 2 Nkuhanas Retort Risk of Rain 2 ice wall gold coast lighting
Home life

Pliny the Elder with some stuff

Not much on the home front. I guess there's fantasy and wheel trades. Having re-watched Contagion and 12 Monkeys, I'm looking for a movie to complete the covid trilogy.

Week | d'san andreas da bears (Medieval Gridiron) | Covid-20 (Password is Taco) | Dominicas (Siren)
1 | Danville Isotopes, 110.8 - 72.5 W (1-0) | Black Cat Cowboys, 155.66 - 78.36 W (1-0) | TeamNeverSkipLegDay, 136.24 - 107.50 W (1-0)
2 | Screaming Goat Battering Rams, 119.9 - 105.9 W (2-0) | [Random UTF characters resembling an EQ], 115.50 - 115.74 L (1-1) | Dem' Arby's Boyz, 94.28 - 102.02 L (1-1)
3 | Nogales Chicle, 106.5 - 117.8 L (2-1) | Circle the Wagons, 100.42 - 90.02 W (2-1) | JoeExotic'sPrisonOil, 127.90 - 69.70 W (2-1)
4 | Britons Longbowmen, 122.9 - 105.1 W (3-1) | Staying at Mahomes, 123.28 - 72.90 W (3-1) | Daaaaaaaang, 138.10 - 108.00 W (3-1)
5 | Toronto Tanto, 105.0 - 108.2 L (3-2) | Robocop's Posse, 111.32 - 134.26 L (3-2) | Alpha Males, 86.20 - 76.12 W (4-1)
6 | Only Those Who Stand, 108.2 - 66.7 W (4-2) | KickAssGreenNinja, 65.10 - 84.02 L (3-3) | SlideCode #Jab, 71.60 - 53.32 W (5-1)
7 | San Francisco Seduction, 121.7 - 126.4 L (4-3) | Ma ma ma my Corona, 118.22 - 84.20 W (4-3) | G's Unit, 109.20 - 92.46 W (6-1)
8 | LA Boiling Hot Tar, 116.2 - 59.4 W (5-3) | Kamaravirus, 118.34 - 109.94 W (5-3) | WeaponX, 113.14 - 85.40 W (7-1)
9 | SD The Rapier, 135.0 - 90.8 W (6-3) | C. UNONEUVE, 117.80 - 90.16 W (6-3) | Chu Fast Chu Furious, 128.28 - 59.06 W (8-1)

Dog weimaraner dinner Dog weimaraner Christmas sweater mid-sneeze



Storypost | 2020.11.01

Del Mar San Diego surfing nightsurfing surf stealth mission
Night surfing

Derrick put together a crew for a stealth mission over in Del Mar. I ordered a ball mount dive light from Amazon to help address the challenge of focusing at night. Frustratingly, I ran into the same issue that I experienced with some cables for my media room buildout: they simply sent me the wrong item with the right label.

Ikelite underwater housing Nikon D700 substrobe flashlight surf photography equipment

So I went to a local dive store and picked up a Light and Motion GoBe 500. It even came with an interchangeable red bulb to allow for a light hue that's both good for focus and easy on the eyes. As usual, I semi-snooted the flash to prevent it from blowing out the water directly in front of me.

Del Mar San Diego surfing nightsurfing surf stealth mission

The waves were pretty small, but it was clean and I could stand in waist-deep water for most of it.

Del Mar San Diego surfing nightsurfing surf stealth mission focus

Focus was still a challenge, but this setup is the best so far. The biggest difference may have been switching from single to continuous focus to minimize focus time for steadily-approaching objects. Glowsticks were still an important way to find my subjects in the viewfinder; I think live view would make a huge difference.

Del Mar San Diego surfing nightsurfing surf stealth mission

Underestimating the DS-51's power, I ran 500 ISO and had to dial down from TTL to manual -2 on the spot. Unable to easily change ISO in the darkness, I mostly shot at 1/160 or 1/200 but briefly used 1/10 and got some of that red glow. Dragging the shutter might have helped make these shots pop a bit more.

Del Mar San Diego surfing nightsurfing surf stealth mission

As usual, distance makes all the difference in the world for framing and flash power. I think this setup is ready for a slave flash. Where's Jon?

Underwater camera housing night red dive light Night surfing flash inside wave Night surfing flash foam Night surfing flash ride Night surfing flash ride
Night surfing flash popup Night surfing flash standing Rusty board Night surfing crew

I finally put together a small portfolio of surf shots.

Chris
Missed cyclists and scuba divers
Derrick
Also swimmers and casual beach goers
Most well wishers. And anyone who came from over a mile away
Fishermen
And those skim boarders at Bird Rock
Halloween

Inflatable skeleton tyrannosaurus rex skellyrex

Halloween was more or less canceled this year, but since there was a crazy pirate ship setup nearby, Jes and I threw together a janky Team Zissou costume (sans Glock) and took a lap of the neighborhood.

Halloween Life Aquatic costumes campari Halloween pirate ship house Halloween pirate ship house
Decision 2020/pandemic

2020 election ballot Kanye West

Election day is nigh.

News Janice McGeachin covid denial

And we're setting record infection rates across the country. I suddenly regret ever patronizing The Celt. At least we got their Giant Guinness.
The other pandemic

Pandemic Legacy Season 2 character selection map

Ted and Chrissy stopped in Irvine on their road trip, so Jes and I went up to continue our Pandemic Legacy S2 campaign. We squeaked out a win - which I guess is the only kind of win in Pandemic.

California wildfires burned yard hill

Ted did story time about his run-in with brushfires. The very next day...

Silverado fire Irvine

... Silverado fire in Irvine. Very sus.

Back on the subject of Pandemic Legacy, season zero is on its way!
Volatility

Reddit WallStreetBets Dr Strangelove Wargames meme

I got back into volatility when potus contracted covid. It didn't go too crazy, so I decided to hold through the election, selling a few covered calls along the way.


Last week saw a pretty big jump with the resurgence of the 'rona, but I'm definitely willing to roll the dice that next week and month will have more ups and downs. I'm heavy into bonds anyway; this is just for funsies.

GEX DIX SqueezeMetrics dark pool

To add to my compendium of basic options trading: GEX and DIX. These are (I think) proprietary, unofficial indices of 'dark pool' money, which I understand to be one way that people with deep pockets make secret illuminati trades in order to not moon or drill the price of a security mid-trade. Or something.

Dark pools are private exchanges for trading securities that are not accessible by the investing public. Also known as "dark pools of liquidity," the name of these exchanges is a reference to their complete lack of transparency. Dark pools came about primarily to facilitate block trading by institutional investors who did not wish to impact the markets with their large orders and obtain adverse prices for their trades.

Source.
Football

Aaron Rodgers sack Vikings
Somehow Cincy decided it could beat the Titans and ended my survivor bid with a mere three opponents remaining.

Week | d'san andreas da bears (Medieval Gridiron) | Covid-20 (Password is Taco) | Dominicas (Siren)
1 | Danville Isotopes, 110.8 - 72.5 W (1-0) | Black Cat Cowboys, 155.66 - 78.36 W (1-0) | TeamNeverSkipLegDay, 136.24 - 107.50 W (1-0)
2 | Screaming Goat Battering Rams, 119.9 - 105.9 W (2-0) | [Random UTF characters resembling an EQ], 115.50 - 115.74 L (1-1) | Dem' Arby's Boyz, 94.28 - 102.02 L (1-1)
3 | Nogales Chicle, 106.5 - 117.8 L (2-1) | Circle the Wagons, 100.42 - 90.02 W (2-1) | JoeExotic'sPrisonOil, 127.90 - 69.70 W (2-1)
4 | Britons Longbowmen, 122.9 - 105.1 W (3-1) | Staying at Mahomes, 123.28 - 72.90 W (3-1) | Daaaaaaaang, 138.10 - 108.00 W (3-1)
5 | Toronto Tanto, 105.0 - 108.2 L (3-2) | Robocop's Posse, 111.32 - 134.26 L (3-2) | Alpha Males, 86.20 - 76.12 W (4-1)
6 | Only Those Who Stand, 108.2 - 66.7 W (4-2) | KickAssGreenNinja, 65.10 - 84.02 L (3-3) | SlideCode #Jab, 71.60 - 53.32 W (5-1)
7 | San Francisco Seduction, 121.7 - 126.4 L (4-3) | Ma ma ma my Corona, 118.22 - 84.20 W (4-3) | G's Unit, 109.20 - 92.46 W (6-1)
8 | LA Boiling Hot Tar, 116.2 - 59.4 W* (5-3) | Kamaravirus, 118.24 - 109.94 W* (5-3) | WeaponX, 106.14 - 79.80 W* (7-1)
Misc

Photo lighting plants

Rob bought some camera gear.

Dog weimaraner TV remote Samsung lazy