'NoneType' object has no attribute '_inbound_nodes' in Keras seq2seq classification
I have a problem creating a Keras model. I found a simple encoder-decoder example and tried to adapt it as below:



# ... encoder definition above is omitted here ...
encoder_outputs, state_h, state_c = encoder(encoder_inputs)

encoder_states = [state_h, state_c]

decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(encoder_outputs[-1:], initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model(inputs=[encoder_inputs], outputs=decoder_outputs)


The idea is to use only the last encoder output as the decoder input, so the model produces a single output.



I wonder why it raises an error at:

model = Model(inputs=[encoder_inputs], outputs=decoder_outputs)

with the message:

'NoneType' object has no attribute '_inbound_nodes'

How do I solve it? I looked through answers to similar questions, but didn't find one that fixes mine.










python machine-learning keras classification lstm






edited Nov 23 '18 at 12:11 by today










asked Nov 23 '18 at 9:35 by Isaac Sim
























1 Answer






First of all, encoder_outputs[-1:] gives you the last sample along the batch axis, not the last timestep of each sample, which is encoder_outputs[:, -1:].
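The difference is easy to see on a plain NumPy array of the same rank (batch, timesteps, features); this is only an illustration of the indexing, not Keras code, and the shape here is a made-up example:

```python
import numpy as np

# A hypothetical batch: 4 samples, 5 timesteps, 3 features each.
x = np.zeros((4, 5, 3))

# [-1:] slices the first (batch) axis: the last sample,
# with all of its timesteps.
print(x[-1:].shape)     # (1, 5, 3)

# [:, -1:] slices the second (time) axis: the last timestep
# of every sample, which is what the decoder needs here.
print(x[:, -1:].shape)  # (4, 1, 3)
```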



Second, since Keras layers must be applied to Keras tensors, you need a Lambda layer to do the slicing:



last_input = Lambda(lambda x: x[:, -1:])(encoder_outputs)
decoder_outputs, _, _ = decoder_lstm(last_input,
                                     initial_state=encoder_states)
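Putting it together, here is a minimal end-to-end sketch of the fix. The token counts, latent dimension, and encoder definition are hypothetical placeholders, since the question did not show them:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model

num_encoder_tokens = 8   # hypothetical input feature size
num_decoder_tokens = 5   # hypothetical number of classes
latent_dim = 16          # hypothetical LSTM width

encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

# Wrap the slice in a Lambda layer so it stays part of the Keras graph;
# a bare tensor slice would break the graph and trigger the
# '_inbound_nodes' error when building the Model.
last_step = Lambda(lambda x: x[:, -1:])(encoder_outputs)

decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(last_step, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)

model = Model(inputs=[encoder_inputs], outputs=decoder_outputs)

# Two samples, seven timesteps in -> one timestep of class scores out.
pred = model.predict(np.zeros((2, 7, num_encoder_tokens)))
print(pred.shape)  # (2, 1, 5)
```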





          • Well, sorry for not telling you that the decoder's input type [does not include] batch size. The input shape is [sequence length x feature length], so your guess that encoder_outputs[-1:] is the last batch is wrong. Anyway, Lambda seems to work, although I have a few more things to test. Once I am confident with it, I will come back and accept your answer. Thanks

            – Isaac Sim
            Nov 26 '18 at 0:45











          • @IsaacSim Oh, my dear! It is impossible that the batch axis is not present in the tensors. Either you have explicitly set the batch size using batch_input_shape or it is implicitly added by Keras itself.

            – today
            Nov 26 '18 at 9:13













