TensorFlow: running any optimizer gets exit code 139 (interrupted by signal 11: SIGSEGV)
I run the code on an RTX 2080, inside Docker.
As soon as I call sess.run(train_step, feed_dict={...}), I get "Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)". But if I run it on the CPU, it works fine.
I have no idea what happened.
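
For reference, a minimal sketch of how the CPU-only run can be forced without touching the graph, assuming the standard CUDA_VISIBLE_DEVICES mechanism (the variable must be set before TensorFlow initializes CUDA):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # hide all GPUs from TensorFlow
import tensorflow as tf                    # this import now sees no GPU, so all ops run on the CPU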



Using TensorFlow backend.




2018-11-18 13:19:12.025412: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-11-18 13:19:12.132999: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-11-18 13:19:12.133566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce RTX 2080 major: 7 minor: 5 memoryClockRate(GHz): 1.8 pciBusID: 0000:06:00.0 totalMemory: 7.76GiB freeMemory: 7.46GiB
2018-11-18 13:19:12.133584: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-11-18 13:19:12.394726: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-18 13:19:12.394763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2018-11-18 13:19:12.394770: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2018-11-18 13:19:12.394963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7172 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080, pci bus id: 0000:06:00.0, compute capability: 7.5)
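
Note the last log line: the card reports compute capability 7.5 (Turing). The prebuilt TensorFlow 1.x wheels of this era were compiled against CUDA 9.x, which does not support compute capability 7.5, and that mismatch is a plausible cause of exactly this kind of GPU-only SIGSEGV. A minimal check of what the container actually ships (these calls all exist in TF 1.x):

import tensorflow as tf
print(tf.__version__)                  # TensorFlow release inside the container
print(tf.test.is_built_with_cuda())    # True if this wheel was compiled with CUDA support
print(tf.test.gpu_device_name())       # e.g. '/device:GPU:0', or '' if no usable GPU is found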




import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow.contrib.framework import arg_scope
from keras.layers import Dense, Activation
import pickle
from tensorflow.contrib.layers import batch_norm, flatten

train_data = {b'data': [], b'labels': []}
# Load the training data
for i in range(5):
    with open("data/cifar-10/data_batch_" + str(i + 1), mode='rb') as file:
        data = pickle.load(file, encoding='bytes')
        train_data[b'data'] += list(data[b'data'])
        train_data[b'labels'] += data[b'labels']
# Load the test data
with open("data/cifar-10/test_batch", mode='rb') as file:
    test_data = pickle.load(file, encoding='bytes')
# Define some constants
NUM_LABLES = 10  # 10 output classes
BATCH_SIZE = 64  # samples per training batch

sess = tf.InteractiveSession()


# Weight initialization
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=2 / shape[0] / shape[1] / shape[2])
    # initial = tf.truncated_normal(shape, stddev=0.01)
    return tf.Variable(initial)


# Convolution biases are initialized to the constant 0.1
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)


# Convolution with stride 1; padding='SAME' means zero padding
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


# Max pooling with a 2x2 window, stride 2, zero padding
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')


# Placeholders: the input is a BATCH x 3072 vector, the output a BATCH x 10 vector
x = tf.placeholder(tf.float32, [None, 3072])
y_ = tf.placeholder(tf.float32, [None, NUM_LABLES])
# Reshape the input into 3 x 32 x 32 format
x_image = tf.reshape(x, [-1, 3, 32, 32])
# Transpose into the 32 x 32 x 3 layout the convolution filters expect
x_image = tf.transpose(x_image, [0, 2, 3, 1])

# First convolutional layer: 32 filters of size 3 x 3 x 3
# bn_layer1 = Batch_Normalization(x_image, istraining, "bn1")
W_conv1 = weight_variable([3, 3, 3, 32])
b_conv1 = bias_variable([32])
h_conv1 = conv2d(x_image, W_conv1) + b_conv1
# h_conv1 = tf.layers.dropout(inputs=h_conv1, rate=droprate, training=istraining)
h_relu1 = tf.nn.relu(h_conv1)    # activation (ReLU)
h_pool1 = max_pool_2x2(h_relu1)  # pooling

h_pool4 = tf.reshape(h_pool1, [-1, 16 * 16 * 32])
bn_layer5_flat = tf.layers.dense(inputs=h_pool4, units=10, name='linear')

cross_entropy = tf.losses.softmax_cross_entropy(onehot_labels=y_, logits=bn_layer5_flat,
                                                reduction=tf.losses.Reduction.MEAN)
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(bn_layer5_flat, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess.run(tf.global_variables_initializer())
x_train = np.array(train_data[b'data']) / 255
y_train = np.array(pd.get_dummies(train_data[b'labels']))
x_test = test_data[b'data'] / 255
y_test = np.array(pd.get_dummies(test_data[b'labels']))
eplr = 1e-4  # note: decayed below, but never actually wired into the optimizer
for i in range(20000):
    if i == 20000 * 0.5 or i == 20000 * 0.75:
        eplr = eplr / 10
    start = i * BATCH_SIZE % (50000 - BATCH_SIZE)
    sess.run(train_step, feed_dict={x: x_train[start: start + BATCH_SIZE],
                                    y_: y_train[start: start + BATCH_SIZE],
                                    })
    if i % 100 == 0:
        # accuracy is evaluated on the first 200 test images
        train_accuracy = accuracy.eval(feed_dict={x: x_test[0: 200],
                                                  y_: y_test[0: 200]
                                                  })
        loss_value = cross_entropy.eval(feed_dict={x: x_train[start: start + BATCH_SIZE],
                                                   y_: y_train[start: start + BATCH_SIZE]
                                                   })
        print("step %d, training accuracy %g, loss %g" % (i, train_accuracy, loss_value))

test_accuracy = accuracy.eval(feed_dict={x: x_test, y_: y_test})
print("test accuracy %g" % test_accuracy)









tensorflow sigsegv






asked Nov 18 at 13:32 by 先生林
edited Nov 19 at 6:48 by Kzrystof