Cannot concatenate Keras Lambda layers











I need to process some layers in a different way, applying an OR operation. I found a way to do it: I create a Lambda layer and process the data with keras.backend.any. I also split the tensor first, because I need to apply the logical OR to two separate groups.



def logical_or_layer(x):
    """Processing an OR operation"""
    import keras.backend
    # normalized to 0,1
    aux_array = keras.backend.sign(x)
    aux_array = keras.backend.relu(aux_array)
    # OR operation
    aux_array = keras.backend.any(aux_array)
    # casting back the True/False to 1,0
    aux_array = keras.backend.cast(aux_array, dtype='float32')

    return aux_array
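
(Side note for readers: keras.backend.any without an axis argument reduces over every axis, the batch axis included, so the result is a 0-d tensor. A minimal sketch, assuming the TensorFlow backend and made-up constant values:)

import numpy as np
import keras.backend as K

x = K.constant(np.array([[0., 1., 0.],
                         [0., 0., 0.]]))  # pretend batch of 2 samples, 3 features

print(K.int_shape(K.any(x)))            # () -> scalar, batch axis is gone
print(K.int_shape(K.any(x, axis=-1)))   # (2,) -> one OR result per sample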


Then I am creating my layers like this:



#this is the input tensor
inputs = Input(shape=(inputSize,))

#this is the Neurule layer
x = Dense(neurulesQt, activation='softsign')(inputs)
#after each neurule layer, the outputs need to be put into SIGNUM (-1 or 1)
x = Lambda(signumTransform, output_shape=lambda x: x, name='signumAfterNeurules')(x)

#separating into 2 (2 possible outputs)
layer_split0 = Lambda(lambda x: x[:, :end_output0], output_shape=(11,), name='layer_split0')(x)
layer_split1 = Lambda(lambda x: x[:, start_output1:end_output1], output_shape=(9,), name='layer_split1')(x)

#this is the OR layer
y_0 = Lambda(logical_or_layer, output_shape=(1,), name='or0')(layer_split0)
y_1 = Lambda(logical_or_layer, output_shape=(1,), name='or1')(layer_split1)


Just FYI: neurules are neurons built from IF-THEN rules; this is a project that works with neurons trained from a truth table representing expert knowledge.



Now, when I try to join the split layers back together like this:



y = concatenate([y_0,y_1])


I get this error:



ValueError: Can't concatenate scalars (use tf.stack instead) for 'concatenate_32/concat' (op: 'ConcatV2') with input shapes: , , .


OK then, let's use tf.stack as suggested:



y = keras.backend.stack([y_0, y_1])


But then it can no longer be used as an output of the Model; when I try:



model = Model(inputs=inputs, outputs=y)


I get the error:



ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("stack_14:0", shape=(2,), dtype=float32)


Checking with keras.backend.is_keras_tensor(y) gives me False, while for all the other layer outputs it gives True.
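
(For reference, a minimal sketch of that check, assuming standalone Keras with the TensorFlow backend; the layers below are made up for illustration:)

from keras.layers import Input, Lambda
import keras.backend as K

inp = Input(shape=(3,))
lam = Lambda(lambda t: t * 2)(inp)   # output of a Keras Layer -> carries layer metadata
raw = K.stack([lam, lam])            # output of a raw backend op -> no layer metadata

print(K.is_keras_tensor(lam))  # True
print(K.is_keras_tensor(raw))  # False, so it cannot be used directly as a Model output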



How should I concatenate it correctly?



EDIT: Following the answer from @today, I was able to create a new Lambda layer with the stack wrapped inside it. But the output shape is wrong: it should be (None, 2) and instead it is (2, None, 1). Here is the output from model.summary():



__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_90 (InputLayer) (None, 24) 0
__________________________________________________________________________________________________
dense_90 (Dense) (None, 20) 500 input_90[0][0]
__________________________________________________________________________________________________
signumAfterNeurules (Lambda) (None, 20) 0 dense_90[0][0]
__________________________________________________________________________________________________
layer_split0 (Lambda) (None, 11) 0 signumAfterNeurules[0][0]
__________________________________________________________________________________________________
layer_split1 (Lambda) (None, 9) 0 signumAfterNeurules[0][0]
__________________________________________________________________________________________________
or0 (Lambda) (None, 1) 0 layer_split0[0][0]
__________________________________________________________________________________________________
or1 (Lambda) (None, 1) 0 layer_split1[0][0]
__________________________________________________________________________________________________
output (Lambda) (2, None, 1) 0 or0[0][0]
or1[0][0]
==================================================================================================
Total params: 500
Trainable params: 0
Non-trainable params: 500
__________________________________________________________________________________________________


How should I define output_shape in the layers so that the batch dimension is still there at the end?
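
(For what it's worth, one way to keep the batch axis would be to reduce only over the feature axis inside the OR function; this is a sketch of my own, not the accepted answer's code:)

import keras.backend as K

def logical_or_layer_batched(x):
    """OR over the feature axis only, so the batch axis survives."""
    aux = K.relu(K.sign(x))                   # normalize to 0/1
    aux = K.any(aux, axis=-1, keepdims=True)  # (None, features) -> (None, 1)
    return K.cast(aux, dtype='float32')

With per-sample (None, 1) outputs, a plain concatenate([y_0, y_1]) should then give (None, 2) directly.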



EDIT 2: Following the tips from @today, I've done the following:



#this is the input tensor
inputs = Input(shape=(inputSize,))

#this is the Neurule layer
x = Dense(neurulesQt, activation='softsign')(inputs)
#after each neuron layer, the outputs need to be put into SIGNUM (-1 or 1)
x = Lambda(signumTransform, output_shape=lambda x: x, name='signumAfterNeurules')(x)
#separating into 2 (2 possible outputs)
layer_split0 = Lambda(lambda x: x[:, :end_output0], output_shape=[11], name='layer_split0')(x)
layer_split1 = Lambda(lambda x: x[:, start_output1:end_output1], output_shape=[9], name='layer_split1')(x)
#this is the OR layer
y_0 = Lambda(logical_or_layer, output_shape=(1,), name='or0')(layer_split0)
y_1 = Lambda(logical_or_layer, output_shape=(1,), name='or1')(layer_split1)

y = Lambda(lambda x: K.stack([x[0], x[1]]), output_shape=(2,), name="output")([y_0, y_1])


Now it seems to work correctly; the model.summary() output is below:



__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 24) 0
__________________________________________________________________________________________________
dense_1 (Dense) (None, 20) 500 input_1[0][0]
__________________________________________________________________________________________________
signumAfterNeurules (Lambda) (None, 20) 0 dense_1[0][0]
__________________________________________________________________________________________________
layer_split0 (Lambda) (None, 11) 0 signumAfterNeurules[0][0]
__________________________________________________________________________________________________
layer_split1 (Lambda) (None, 9) 0 signumAfterNeurules[0][0]
__________________________________________________________________________________________________
or0 (Lambda) (None, 1) 0 layer_split0[0][0]
__________________________________________________________________________________________________
or1 (Lambda) (None, 1) 0 layer_split1[0][0]
__________________________________________________________________________________________________
output (Lambda) (None, 2) 0 or0[0][0]
or1[0][0]
==================================================================================================
Total params: 500
Trainable params: 0
Non-trainable params: 500
__________________________________________________________________________________________________









python tensorflow keras






edited Nov 20 at 9:10
asked Nov 19 at 14:42
Vinicius 158


1 Answer






Wrap the K.stack inside a Lambda layer, like this:

from keras import backend as K

y = Lambda(lambda x: K.stack([x[0], x[1]]))([y_0, y_1])
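
As a usage sketch (reusing the inputs tensor from the question), the stacked output now carries Keras layer metadata and can be passed to Model:

from keras.models import Model

model = Model(inputs=inputs, outputs=y)  # y is now the output of a Lambda layer
model.summary()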





answered Nov 19 at 14:48
today
• Thank you! I was trying to wrap the stack inside the Lambdas I had already created and nothing happened... but as you said, I need to create one more layer and do it inside that one; now it's working.
  – Vinicius Nov 19 at 14:58










• The problem now is that I get an output shape of (2,) when it should be (None, 2), and I can't pass that as a parameter; it gives me an IndexError: tuple index out of range.
  – Vinicius Nov 19 at 15:07










• @Vinicius It is caused by other parts of the model, probably your custom layers. It seems you are not preserving the batch axis.
  – today Nov 19 at 17:46










• I've tried to modify the output_shapes with no success. I've edited the question with the model.summary() output; do you know what I should change now?
  – Vinicius Nov 20 at 7:31












• @Vinicius I don't think you need to use K.stack at all. The output of your layers should preserve the batch axis. Change the output_shape argument of the or0 and or1 layers to (None, 1) and then use the concatenate layer instead. If you would like to use K.stack, you can set its axis argument to -1; however, I recommend the concatenate layer.
  – today Nov 20 at 7:44
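
A sketch of the alternative described in that comment (my own reading of it, assuming the or0/or1 outputs are changed so they keep the batch axis, i.e. shape (None, 1)):

from keras.layers import concatenate

# with y_0 and y_1 each of shape (None, 1):
y = concatenate([y_0, y_1], name='output')   # -> (None, 2)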












