LSTM input shape for multivariate time series?























I know this question has been asked many times, but I truly can't fix this input-shape issue for my case.




My x_train shape == (5523000, 13) // (13 time series of length 5523000)



My y_train shape == (5523000, 1)



number of classes == 2




To reshape the x_train and y_train:



x_train = x_train.values.reshape(27615, 200, 13)  # 5523000 / 200 = 27615
y_train = y_train.values.reshape((5523000, 1))  # I know I have a problem here, but I don't know how to fix it
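For reference, a plain-NumPy sketch of a windowing that keeps x and y aligned (my assumption, not from the thread: one label per 200-step window, taken from the window's last row; the arrays here are scaled-down stand-ins for the real 5523000-row data):

```python
import numpy as np

timesteps = 200
n_features = 13

# scaled-down stand-ins for the real arrays (the full data has 5523000 rows)
x_flat = np.zeros((2000, n_features))
y_flat = np.zeros((2000, 1))

n_windows = x_flat.shape[0] // timesteps          # 2000 / 200 = 10
x_train = x_flat.reshape(n_windows, timesteps, n_features)

# one label per window instead of one per row, e.g. the window's last label
y_train = y_flat.reshape(n_windows, timesteps)[:, -1]

print(x_train.shape, y_train.shape)  # (10, 200, 13) (10,)
```

With the full data this would give x_train of shape (27615, 200, 13) and y_train of shape (27615,), so the number of samples matches.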


Here is my LSTM network:



def lstm_baseline(x_train, y_train):
    batch_size = 200
    model = Sequential()
    model.add(LSTM(batch_size, input_shape=(27615, 200, 13),
                   activation='relu', return_sequences=True))
    model.add(Dropout(0.2))

    model.add(LSTM(128, activation='relu'))
    model.add(Dropout(0.1))

    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))

    model.add(Dense(1, activation='softmax'))

    model.compile(
        loss='categorical_crossentropy',
        optimizer='rmsprop',
        metrics=['accuracy'])

    model.fit(x_train, y_train, epochs=15)

    return model


Whenever I run the code I get this error:




ValueError: Input 0 is incompatible with layer lstm_10: expected
ndim=3, found ndim=4




My question is: what am I missing here?




PS: The idea of the project is that I have 13 signals coming from 13 points on the human body, and I want to use them to detect a certain type of disorder (an arousal). Using the LSTM, I want my model to locate the regions where that arousal occurs, based on these 13 signals.








The whole dataset covers 993 patients; for each one I use 13 signals to detect the disorder regions.




If you want the data in 3D:



(500000, 13, 993)  # (nb_records, nb_signals, nb_patients)




For each patient I have 500000 observations of 13 signals; nb_patients is 993.
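As an aside (my assumption about Keras conventions, not something stated in the thread): Keras LSTMs expect input of shape (samples, timesteps, features), which here would be (patients, records, signals), so that 3D array would need its patient axis moved to the front. A scaled-down sketch:

```python
import numpy as np

# scaled-down stand-in for the (nb_records, nb_signals, nb_patients) array;
# the thread's real shape is (500000, 13, 993)
data = np.zeros((1000, 13, 9))

# move the patient axis to the front: (patients, records, signals)
batch = np.moveaxis(data, 2, 0)
print(batch.shape)  # (9, 1000, 13)
```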




It's worth noting that the 500000 size doesn't matter, as I can have patients with more or fewer observations than that.
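Since patients have different lengths, one common option (my suggestion, not something from the thread) is to pad or truncate every patient to a common length before stacking them into one 3D batch. A plain-NumPy sketch:

```python
import numpy as np

def pad_or_truncate(signals, target_len):
    """Zero-pad (or cut) along the time axis so every patient
    ends up with shape (target_len, n_signals)."""
    n, d = signals.shape
    if n >= target_len:
        return signals[:target_len]
    out = np.zeros((target_len, d), dtype=signals.dtype)
    out[:n] = signals
    return out

# two hypothetical patients with different numbers of observations
p1 = np.ones((300, 13))
p2 = np.ones((500, 13))

batch = np.stack([pad_or_truncate(p, 400) for p in (p1, p2)])
print(batch.shape)  # (2, 400, 13)
```

With padding, a Masking layer in Keras can tell the LSTM to skip the zero-padded steps.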



Update: here is a sample of data from one patient.



Here is a chunk of my data: the first 2000 rows.










  • What is exactly your input data? Why are you reshaping your data like this: x_train = x_train.values.reshape(27615, 200, 13)? Please provide a little more context and, ideally, some x_train and y_train examples (just 2 or 3).
    – abeagomez
    Nov 20 at 21:49















python machine-learning neural-network data-mining lstm






edited Nov 20 at 23:45

























asked Nov 19 at 20:43 by CHAMI Soufiane













2 Answers




































You may try some modifications like the ones below:




x_train = x_train.reshape(1999, 1, 13)
# double-check dimensions
x_train.shape

def lstm_baseline(x_train, y_train, batch_size):
    model = Sequential()
    model.add(LSTM(batch_size, input_shape=(None, 13),
                   activation='relu', return_sequences=True))
    model.add(Dropout(0.2))

    model.add(LSTM(128, activation='relu'))
    model.add(Dropout(0.1))

    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))

    # sigmoid (not softmax) for a single-unit binary output with binary_crossentropy
    model.add(Dense(1, activation='sigmoid'))

    model.compile(
        loss='binary_crossentropy',
        optimizer='adam',
        metrics=['accuracy'])

    return model














































    OK, I made some changes to your code. First, I still don't know what the "200" in your attempt to reshape your data means, so I'm going to give you working code and let's see if you can use it, or modify it to make your own code work. The sizes of your input data and your targets have to match: you cannot have an input x_train with 27615 rows (which is the meaning of x_train.shape[0] == 27615) and a target set y_train with 5523000 values.



    I took the first two rows from the data example that you provided:



    x_sample = [[-17,  -7, -7,  0, -5, -18, 73, 9, -282, 28550, 67],
    [-21, -16, -7, -6, -8, 15, 60, 6, -239, 28550, 94]]

    y_sample = [0, 0]


    Let's reshape x_sample:



    import numpy as np

    x_train = np.array(x_sample)

    # Here x_train.shape == (2, 11); we want to reshape it to (2, 11, 1)
    # to fit the network's input dimension
    x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)


    You are using a categorical loss, so you have to change your targets to categorical (check https://keras.io/utils/):



    from keras.utils import to_categorical

    y_train = np.array(y_sample)
    y_train = to_categorical(y_train, 2)


    Now you have two categories. I assumed two because, in the data you provided, all the target values are 0, so I don't know how many possible values your target can take. If your target can take 4 possible values, then the number of categories in the to_categorical call will be 4. Every output of your last dense layer will represent a category, and its value, the probability that your input belongs to that category.
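    To illustrate what to_categorical produces (a plain-NumPy equivalent, since the point is just one-hot encoding):

    ```python
    import numpy as np

    labels = np.array([0, 1, 0, 1])   # hypothetical 0/1 targets
    num_classes = 2

    # index into an identity matrix: row i is the one-hot vector for class i
    one_hot = np.eye(num_classes)[labels]
    print(one_hot)
    # [[1. 0.]
    #  [0. 1.]
    #  [1. 0.]
    #  [0. 1.]]
    ```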



    Now, we just have to slightly modify your LSTM model:



    def lstm_baseline(x_train, y_train):
        batch_size = 200
        model = Sequential()
        # We are going to use input_dim instead of input_shape
        model.add(LSTM(batch_size, input_dim=1,
                       activation='relu', return_sequences=True))
        model.add(Dropout(0.2))

        model.add(LSTM(128, activation='relu'))
        model.add(Dropout(0.1))

        model.add(Dense(32, activation='relu'))
        model.add(Dropout(0.2))

        # Set the number of outputs to 2, to match the number of categories
        model.add(Dense(2, activation='softmax'))

        model.compile(
            loss='categorical_crossentropy',
            optimizer='rmsprop',
            metrics=['accuracy'])

        model.fit(x_train, y_train, epochs=15)

        return model


























(first answer) answered Nov 20 at 22:01 by user_dhrn

























(second answer) answered Nov 20 at 22:27 by abeagomez






























