
Element-wise multiplication with broadcasting in a Keras custom layer

python

I am creating a custom layer whose weights need to be multiplied element-wise before the activation. I can get it to work when the output and the input have the same shape. The problem occurs when I use a first-order array as input and a second-order array as output. tensorflow.multiply supports broadcasting, but when I try to use it in Layer.call(x, self.kernel) to multiply x by the self.kernel variable, it complains that they have different shapes, saying:

ValueError: Dimensions must be equal, but are 4 and 3 for 'my_layer_1/Mul' (op: 'Mul') with input shapes: [?,4], [4,3].

Here is my code:

from keras import backend as K
from keras.engine.topology import Layer
import tensorflow as tf
from keras.models import Sequential
import numpy as np

class MyLayer(Layer):

    def __init__(self, output_dims, **kwargs):
        self.output_dims = output_dims

        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=self.output_dims,
                                      initializer='ones',
                                      trainable=True)


        super(MyLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        #multiply wont work here?
        return K.tf.multiply(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (self.output_dims)

mInput = np.array([[1,2,3,4]])
inShape = (4,)
net = Sequential()
outShape = (4,3)
l1 = MyLayer(outShape, input_shape= inShape)
net.add(l1)
net.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
p = net.predict(x=mInput, batch_size=1)
print(p)

Edit: Given an input shape of (4,) and an output shape of (4,3), the weight matrix should have the same shape as the output and be initialized with ones. So in the code above, the input is [1,2,3,4], the weight matrix should be [[1,1,1,1],[1,1,1,1],[1,1,1,1]], and the output should look like [[1,2,3,4],[1,2,3,4],[1,2,3,4]].
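
For reference, a minimal NumPy sketch of the broadcasting rule at play here (trailing dimensions must be equal or 1), using the same example values. It only lines up because the batch size is 1, which is why the answer below first repeats the input along an extra axis:

import numpy as np

x = np.array([[1, 2, 3, 4]])       # shape (1, 4): one sample in a batch
bad_kernel = np.ones((4, 3))       # shape (4, 3), as in the failing layer

# np.multiply(x, bad_kernel) raises the same kind of error:
# trailing dimensions 4 and 3 are neither equal nor 1, so no broadcast.

good_kernel = np.ones((3, 4))      # shape (3, 4): trailing dimension matches x
print(x * good_kernel)             # [[1. 2. 3. 4.]
                                   #  [1. 2. 3. 4.]
                                   #  [1. 2. 3. 4.]]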



1 Answer

一尘不染

Before multiplying, you need to repeat the elements to increase the shape. You can use K.repeat_elements for that (with from keras import backend as K).
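
A minimal sketch of what that does to the shapes, assuming the same (1, 4) input and a repeat count of 3:

from keras import backend as K
import numpy as np

x = K.constant(np.array([[1, 2, 3, 4]]))   # shape (1, 4): batch of one sample
x = K.expand_dims(x, axis=1)               # shape (1, 1, 4)
x = K.repeat_elements(x, rep=3, axis=1)    # shape (1, 3, 4): sample repeated 3 times
print(K.eval(x))
# [[[1. 2. 3. 4.]
#   [1. 2. 3. 4.]
#   [1. 2. 3. 4.]]]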

from keras import backend as K
from keras.engine.topology import Layer

class MyLayer(Layer):

    #there are some difficulties for different types of shapes   
    #let's use a 'repeat_count' instead, increasing only one dimension
    def __init__(self, repeat_count,**kwargs):
        self.repeat_count = repeat_count
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):

        #first, let's get the output_shape
        output_shape = self.compute_output_shape(input_shape)
        weight_shape = (1,) + output_shape[1:] #replace the batch size by 1


        self.kernel = self.add_weight(name='kernel',
                                      shape=weight_shape,
                                      initializer='ones',
                                      trainable=True)


        super(MyLayer, self).build(input_shape)  # Be sure to call this somewhere!

    #here, we need to repeat the elements before multiplying
    def call(self, x):

        if self.repeat_count > 1:

             #we add the extra dimension:
             x = K.expand_dims(x, axis=1)

             #we replicate the elements
             x = K.repeat_elements(x, rep=self.repeat_count, axis=1)


        #multiply
        return x * self.kernel


    #make sure we compute the output shape according to what we did in "call"
    def compute_output_shape(self, input_shape):

        if self.repeat_count > 1:
            return (input_shape[0],self.repeat_count) + input_shape[1:]
        else:
            return input_shape
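
A quick usage sketch for this layer, reusing the setup from the question (repeat_count=3, so the output shape is (None, 3, 4)); since the kernel is initialized with ones, the repeated values come through unchanged:

from keras.models import Sequential
import numpy as np

mInput = np.array([[1, 2, 3, 4]])

net = Sequential()
net.add(MyLayer(3, input_shape=(4,)))    # repeat_count=3 -> output shape (None, 3, 4)
net.compile(loss='mean_absolute_error', optimizer='adam')

p = net.predict(x=mInput, batch_size=1)
print(p)
# [[[1. 2. 3. 4.]
#   [1. 2. 3. 4.]
#   [1. 2. 3. 4.]]]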