Keras with Lambda-Layer



























I'm relatively new to Keras and I'm about to build a Dueling Q-Network to train an AI. I found a code snippet for building a model that, surprisingly, seems to work — I just have no idea why, because I'm not too familiar with lambda expressions in Keras. Can anybody explain to me how exactly the creation of the Lambda layer in the following model works?
Thank you very much in advance!



from keras.models import Sequential
from keras.layers import Dense, Lambda
from keras.optimizers import RMSprop
from keras import backend as K

def build_model():
    model = Sequential()
    model.add(Dense(units=16, activation='relu', input_dim=2))
    model.add(Dense(units=32, activation='relu'))
    model.add(Dense(units=9, activation='relu'))

    # I definitely don't understand how the following layer works:
    model.add(Lambda(lambda i: K.expand_dims(i[:, 0], -1) + i[:, 1:]
                               - K.mean(i[:, 1:], keepdims=True),
                     output_shape=(8,)))

    model.add(Dense(units=8, activation='linear'))
    model.compile(loss='mse', optimizer=RMSprop(lr=0.001))
    return model





























  • Let me know if you need any more help. – rayryeng, Nov 25 '18 at 3:24
  • Thank you, your below answer totally solved my problems in understanding this thing :) – Silvan, Nov 26 '18 at 10:20
















python machine-learning lambda keras deep-learning






asked Nov 22 '18 at 17:02 by Silvan; edited Nov 23 '18 at 3:41 by rayryeng
1 Answer






I am not familiar with your particular area of research, but I can tell you what that layer is doing. A Lambda layer is what you use when you want to apply a custom operation to the tensor coming into the layer — something that isn't already covered by one of Keras's predefined layers.



Here, the argument to the Lambda layer is an anonymous function whose input is the tensor coming into the layer. Note, however, that you can pass any callable to this layer — it doesn't have to be an anonymous function — as long as it operates on the input tensor and produces an output tensor. You define the operation you want to perform on the input tensor, and the layer creates a corresponding output tensor that is fed to the next layer (assuming a feedforward network, which is what we have here). You can think of an anonymous function as a one-time function: it performs its operation, and you don't need it lingering around afterwards, because all it had to do was specify what happens to the input tensor.
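The interchangeability of a named function and a lambda can be seen in plain Python (the Lambda layer simply calls whatever callable it is given; `double` here is a made-up stand-in, not a Keras function):

```python
# A Lambda layer calls the callable it is given on the input tensor,
# so a named function and a lambda are interchangeable.
def double(x):                   # named, reusable function
    return x * 2

double_anon = lambda x: x * 2    # one-time anonymous equivalent

print(double(21))                # 42
print(double_anon(21))           # 42
```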



lambda i thus creates an anonymous function that the Lambda layer applies to the input tensor, named i. K.expand_dims adds a singleton dimension for the purposes of broadcasting. In this case, we take the first column of the input tensor, i[:,0], which is a 1D array, and turn it back into a 2D array with a single column (i.e. going from an (N,) array to an (N, 1) array). The -1 argument is the axis you want to expand: -1 expands the very last dimension, which in this case is the first (and only) dimension.
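The effect of expanding the last axis can be checked with numpy, whose expand_dims has the same semantics as K.expand_dims (a sketch, not the actual Keras call):

```python
import numpy as np

# numpy's expand_dims mirrors K.expand_dims here.
col = np.array([1, 2, 3, 4])        # shape (4,), like i[:, 0]
expanded = np.expand_dims(col, -1)  # axis -1: expand the last dimension

print(col.shape)       # (4,)
print(expanded.shape)  # (4, 1): now a 2D array with a single column
```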



If you're not used to broadcasting, the addition involving this expanded array is a bit hard to follow, but once you get the hang of it, it's one of the most powerful mechanisms in numerical computing. i[:,1:] slices the input tensor so that we keep everything from the second column to the end. Under the hood, adding this sliced tensor to the expanded single column i[:,0] means that the column is replicated and added to every column of i[:,1:] individually.



For example, if i[:,0] were [1, 2, 3] and i[:,1:] were [[4, 4, 4, 4], [5, 5, 5, 5], [6, 6, 6, 6]], performing K.expand_dims(i[:,0], -1) + i[:,1:] would give [[5, 5, 5, 5], [7, 7, 7, 7], [9, 9, 9, 9]] — each row's first-column value is added to every element of that row.
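This broadcast addition can be verified directly with numpy (numbers chosen so the shapes line up: a three-element column against a 3 x 4 block):

```python
import numpy as np

first_col = np.array([1, 2, 3])      # like i[:, 0], shape (3,)
rest = np.array([[4, 4, 4, 4],
                 [5, 5, 5, 5],
                 [6, 6, 6, 6]])      # like i[:, 1:], shape (3, 4)

# (3, 1) + (3, 4): each row's value is replicated across its row.
result = np.expand_dims(first_col, -1) + rest
print(result)
# [[5 5 5 5]
#  [7 7 7 7]
#  [9 9 9 9]]
```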



The last piece of the puzzle is K.mean(i[:,1:], keepdims=True). We take K.expand_dims(i[:,0], -1) + i[:,1:] and subtract K.mean(i[:,1:], keepdims=True) from it. With no axis argument, K.mean averages all values in the tensor from the second column onwards — that is the operation's default behaviour. Depending on how you use K.mean, one or more dimensions may be dropped. The optional axis argument lets you specify which dimension to average over: for example, axis=0 finds the mean of each column individually, which would normally reduce the result to a 1D tensor of values. Specifying keepdims=True keeps the reduced dimensions as singletons, so the result stays 2D — a 1 x N tensor for axis=0, or a 1 x 1 tensor when no axis is given — instead of collapsing to a lower-dimensional tensor. The default is keepdims=False.
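The keepdims behaviour can be sketched with numpy, whose mean works the same way as K.mean in this respect:

```python
import numpy as np

rest = np.array([[4., 4., 4., 4.],
                 [5., 5., 5., 5.],
                 [6., 6., 6., 6.]])   # like i[:, 1:], shape (3, 4)

# No axis: average over every value; keepdims keeps the result 2D.
m_all = np.mean(rest, keepdims=True)
print(m_all, m_all.shape)             # [[5.]] (1, 1)

# axis=0: average each column; keepdims gives a 1 x 4 row instead of (4,).
m_cols = np.mean(rest, axis=0, keepdims=True)
print(m_cols.shape)                   # (1, 4)
```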



Therefore, by doing the K.mean operation, we ensure that the final result is 1 x 1 and this value is subtracted from every value of the K.expand_dims(i[:,0],-1) + i[:,1:] result. This is again possible due to broadcasting.



Finally, output_shape=(8,) tells Keras that this operation produces a 1D tensor of size 8 for each sample.
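The whole Lambda computation can be reproduced with numpy on a small stand-in input (a sketch of the arithmetic only — the real layer runs on Keras tensors):

```python
import numpy as np

# A batch of 2 samples with 9 values each, standing in for the output
# of the previous Dense(9) layer.
i = np.arange(18, dtype=float).reshape(2, 9)

# Same expression as the Lambda layer, written with numpy.
out = np.expand_dims(i[:, 0], -1) + i[:, 1:] - np.mean(i[:, 1:], keepdims=True)
print(out.shape)  # (2, 8): 8 values per sample, matching output_shape=(8,)
```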



tl;dr



This is a custom operation that takes the first column of an input tensor, adds it to all of the remaining columns (from the second column onwards), and then subtracts from every value of that result the average of those remaining columns. Additionally, we constrain the output so that it is a 1D tensor of size 8.



























  • Great! Thorough explanation. However, just one minor thing: "The input into the Lambda layer is an anonymous function". That's not completely true because you can pass any function to a Lambda layer including an anonymous function. – today, Nov 23 '18 at 13:24
  • @today oh right. Thanks. I've changed it. May I get a vote? :) – rayryeng, Nov 23 '18 at 15:54
  • Sure :) Although, I was not the one who asked the question :) – today, Nov 23 '18 at 17:23
  • @today every little bit helps! Thank you anyway! – rayryeng, Nov 24 '18 at 1:17
  • Great answer! This helped me a lot to understand what I'm doing here. Thank you very much! – Silvan, Nov 26 '18 at 10:19











answered Nov 23 '18 at 3:37 by rayryeng; edited Nov 26 '18 at 18:05







