# Computation Layers

class PoolingLayer

2D pooling over the two image dimensions (width and height).

kernel

Default (1,1), a 2-tuple of integers specifying pooling kernel width and height, respectively.

stride

Default (1,1), a 2-tuple of integers specifying pooling stride in the width and height dimensions respectively.

pad

Default (0,0), a 2-tuple of integers specifying the padding in the width and height dimensions, respectively. Padding is two-sided, so a pad of (1,0) pads one pixel on both the left and the right boundary of an image.

pooling

Default Pooling.Max(). Specify the pooling operation to use.

tops
bottoms

Blob names for output and input.
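
To make the kernel/stride/pad semantics concrete, here is a minimal NumPy sketch of 2D max pooling (the default Pooling.Max() operation). The function name and signature are illustrative only, not part of the Mocha API:

```python
import numpy as np

def max_pool2d(img, kernel=(1, 1), stride=(1, 1), pad=(0, 0)):
    """Illustrative 2D max pooling over (width, height); not the Mocha API."""
    kw, kh = kernel
    sw, sh = stride
    pw, ph = pad
    # Two-sided padding: a pad of (1, 0) pads one pixel on both the left
    # and the right boundary. -inf padding is transparent to the max.
    padded = np.full((img.shape[0] + 2 * pw, img.shape[1] + 2 * ph), -np.inf)
    padded[pw:pw + img.shape[0], ph:ph + img.shape[1]] = img
    out_w = (padded.shape[0] - kw) // sw + 1
    out_h = (padded.shape[1] - kh) // sh + 1
    out = np.empty((out_w, out_h))
    for i in range(out_w):
        for j in range(out_h):
            out[i, j] = padded[i * sw:i * sw + kw, j * sh:j * sh + kh].max()
    return out

img = np.arange(1.0, 17.0).reshape(4, 4)
pooled = max_pool2d(img, kernel=(2, 2), stride=(2, 2))
```

With kernel (2,2) and stride (2,2), each non-overlapping 2×2 block is reduced to its maximum; with the default kernel (1,1) and stride (1,1) the layer is an identity map.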

class LRNLayer

Local Response Normalization Layer. It performs normalization over local input regions via the following mapping

$x \rightarrow y = \frac{x}{\left( \beta + (\alpha/n)\sum_{x_j\in N(x)}x_j^2 \right)^p}$

Here $$\beta$$ is the shift, $$\alpha$$ is the scale, $$p$$ is the power, and $$n$$ is the size of the local neighborhood. $$N(x)$$ denotes the local neighborhood of $$x$$ of size $$n$$ (including $$x$$ itself). There are two types of local neighborhood:

• LRNMode.AcrossChannel(): The local neighborhood is a region of shape (1, 1, $$k$$, 1) centered at $$x$$. In other words, the region extends across nearby channels (with zero padding if needed), but has no spatial extent. Here $$k$$ is the kernel size, and $$n=k$$ in this case.
• LRNMode.WithinChannel(): The local neighborhood is a region of shape ($$k$$, $$k$$, 1, 1) centered at $$x$$. In other words, the region extends spatially (in both the width and the height dimensions), again with zero padding when needed, but it does not extend across different channels. In this case $$n=k^2$$.
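
The AcrossChannel() mode can be sketched in NumPy as follows, assuming the (width, height, channels, num) blob layout. This is an illustrative re-implementation of the mapping above, not the Mocha code:

```python
import numpy as np

def lrn_across_channel(x, kernel=5, scale=1.0, shift=1.0, power=0.75):
    """Illustrative across-channel LRN on a (width, height, channels, num) blob."""
    half = kernel // 2
    nch = x.shape[2]
    y = np.empty_like(x)
    for c in range(nch):
        # Local neighborhood of size kernel in the channel dimension,
        # zero-padded at the channel boundaries; n = kernel here.
        lo, hi = max(0, c - half), min(nch, c + half + 1)
        ssq = (x[:, :, lo:hi, :] ** 2).sum(axis=2)
        # y = x / (shift + (scale / n) * sum of squares) ** power
        y[:, :, c, :] = x[:, :, c, :] / (shift + (scale / kernel) * ssq) ** power
    return y
```

For example, with kernel 1, power 1, and the default scale and shift, an input of all ones maps to 0.5 everywhere, since $$1 / (1 + 1 \cdot 1^2) = 0.5$$.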
kernel

Default 5, an integer indicating the kernel size. See $$k$$ in the descriptions above.

scale

Default 1. See $$\alpha$$ in the mapping above.

shift

Default 1 (yes, 1, not 0). See $$\beta$$ in the mapping above.

power

Default 0.75. See $$p$$ in the mapping above.

mode

Default LRNMode.AcrossChannel().

tops
bottoms

Names for output and input blobs. Only one input and one output blob are allowed.

class ElementWiseLayer

The element-wise layer implements basic element-wise operations on its inputs.

operation

Element-wise operation. Built-in operations are in module ElementWiseFunctors, including Add, Subtract, Multiply and Divide.

tops

Output blob names; only one output blob is allowed.

bottoms

Input blob names; the count must match the number of inputs the operation takes.
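
As a sketch of what the built-in functors compute, here are NumPy stand-ins (the names mirror those in ElementWiseFunctors, but the code is illustrative, not the Mocha implementation):

```python
import numpy as np

# Illustrative counterparts of the built-in element-wise functors.
# Each takes two input blobs and produces one output blob, entry by entry.
ops = {
    "Add":      lambda a, b: a + b,
    "Subtract": lambda a, b: a - b,
    "Multiply": lambda a, b: a * b,
    "Divide":   lambda a, b: a / b,
}

a = np.array([2.0, 4.0, 8.0])
b = np.array([1.0, 2.0, 4.0])
out = ops["Divide"](a, b)  # one output blob from the two input blobs
```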

class PowerLayer

The power layer performs the element-wise operation

$y = (ax + b)^p$

where $$a$$ is scale, $$b$$ is shift, and $$p$$ is power. During back propagation, the following element-wise derivatives are computed:

$\frac{\partial y}{\partial x} = pa(ax + b)^{p-1}$

The power layer is implemented separately, rather than as a special case of the element-wise layer, for better performance: many special cases of the power layer (for example, $$p = 1$$) can be computed more efficiently.
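
The forward mapping and its derivative can be checked with a few lines of Python (helper names here are illustrative, not the Mocha API):

```python
def power_forward(x, power=1.0, scale=1.0, shift=0.0):
    # y = (a*x + b)^p with a = scale, b = shift, p = power
    return (scale * x + shift) ** power

def power_backward(x, power=1.0, scale=1.0, shift=0.0):
    # dy/dx = p * a * (a*x + b)^(p-1)
    return power * scale * (scale * x + shift) ** (power - 1)

# Example of a special case: with the defaults (p = 1, a = 1, b = 0)
# the forward pass is the identity and the derivative is constant 1.
y = power_forward(3.0, power=2.0, scale=1.0, shift=1.0)   # (3 + 1)^2
dy = power_backward(3.0, power=2.0, scale=1.0, shift=1.0)  # 2 * (3 + 1)^1
```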

power

Default 1, the power $$p$$ in the formula above.

scale

Default 1, the scale $$a$$ in the formula above.

shift

Default 0, the shift $$b$$ in the formula above.

tops
bottoms

Blob names for output and input.

class SplitLayer

Split layer produces identical copies [1] of the input. The number of copies is determined by the length of the tops property. During back propagation, derivatives from all the output copies are added together and propagated down.

This layer is typically used as a helper to implement some more complicated layers.
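
The forward/backward behavior can be sketched as follows (illustrative Python, not Mocha internals): the forward pass shares the same data across all tops, and the backward pass sums the incoming derivatives before propagating down:

```python
import numpy as np

def split_forward(x, num_tops):
    # Every output is the same underlying data; no actual copying is needed.
    return [x for _ in range(num_tops)]

def split_backward(top_grads):
    # Derivatives from all output copies are added together.
    return np.sum(top_grads, axis=0)
```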

bottoms

Input blob names; only one input blob is allowed.

tops

Output blob names; there should be more than one output blob.

[1] All the data is shared, so there is no actual data copying.

class ChannelPoolingLayer

1D pooling over the channel dimension.

kernel

Default 1, pooling kernel size.

stride

Default 1, stride for pooling.
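
Analogous to the 2D case, channel pooling can be sketched in NumPy as follows, assuming the (width, height, channels, num) blob layout and max pooling; the function name and layout are illustrative, not the Mocha API:

```python
import numpy as np

def channel_max_pool(x, kernel=1, stride=1):
    """Illustrative 1D max pooling over the channel axis of a
    (width, height, channels, num) blob."""
    nch = x.shape[2]
    n_out = (nch - kernel) // stride + 1
    out = np.empty(x.shape[:2] + (n_out,) + x.shape[3:])
    for c in range(n_out):
        # Reduce a window of `kernel` consecutive channels to its maximum.
        out[:, :, c, :] = x[:, :, c * stride:c * stride + kernel, :].max(axis=2)
    return out
```

With the default kernel 1 and stride 1 the layer is an identity map; kernel 2 and stride 2 halves the number of channels, keeping the maximum of each channel pair.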