

python - How do you use TFRecord in TensorFlow?


Problem description

How do I replace the MNIST dataset in the code below with TFRecord data?

Assume the TFRecord dataset is already prepared: train.tfrecords and test.tfrecords are both in the same directory as the current .py file.

I already have code for reading the TFRecord files:

def read_and_decode(filename):
    # Queue the TFRecord file and read serialized examples from it.
    filename_queue = tf.train.string_input_producer([filename])
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    # Parse one example according to the feature schema used when writing.
    features = tf.parse_single_example(
        serialized_example,
        features={
            'label': tf.FixedLenFeature([], tf.int64),
            'img_raw': tf.FixedLenFeature([], tf.string),
        })
    # Decode the raw bytes back into an image tensor and rescale to [-0.5, 0.5].
    img = tf.decode_raw(features['img_raw'], tf.uint8)
    img = tf.reshape(img, [512, 288, 3])
    img = tf.cast(img, tf.float32) * (1. / 255) - 0.5
    label = tf.cast(features['label'], tf.int32)
    return img, label
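For context, the records this function expects would typically have been written with the standard TF 1.x writer API, using the same 'label' / 'img_raw' feature names. A minimal sketch; the single all-zero sample is purely illustrative:

import numpy as np
import tensorflow as tf

# Illustrative single sample: an all-zero 512x288x3 image with label 0.
img_bytes = np.zeros((512, 288, 3), dtype=np.uint8).tobytes()
label = 0

# Write one tf.train.Example whose schema matches read_and_decode above.
writer = tf.python_io.TFRecordWriter('train.tfrecords')
example = tf.train.Example(features=tf.train.Features(feature={
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_bytes])),
}))
writer.write(example.SerializeToString())
writer.close()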

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)

# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 64
display_step = 20

# Network Parameters
n_input = 784  # MNIST data input (img shape: 28*28)
n_classes = 10  # MNIST total classes (0-9 digits)
dropout = 0.75  # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

# Create custom model
def conv2d(name, l_input, w, b):
    return tf.nn.relu(tf.nn.bias_add(
        tf.nn.conv2d(l_input, w, strides=[1, 1, 1, 1], padding='SAME'), b),
        name=name)

def max_pool(name, l_input, k):
    return tf.nn.max_pool(l_input, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME', name=name)

def norm(name, l_input, lsize=4):
    return tf.nn.lrn(l_input, lsize, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                     name=name)

def dnn(_x, _weights, _biases, _dropout):
    _x = tf.nn.dropout(_x, _dropout)
    d1 = tf.nn.relu(tf.nn.bias_add(tf.matmul(_x, _weights['wd1']),
                                   _biases['bd1']), name='d1')
    d2x = tf.nn.dropout(d1, _dropout)
    d2 = tf.nn.relu(tf.nn.bias_add(tf.matmul(d2x, _weights['wd2']),
                                   _biases['bd2']), name='d2')
    dout = tf.nn.dropout(d2, _dropout)
    out = tf.matmul(dout, _weights['out']) + _biases['out']
    return out

# Store layers weight & bias
weights = {
    'wd1': tf.Variable(tf.random_normal([784, 600], stddev=0.01)),
    'wd2': tf.Variable(tf.random_normal([600, 480], stddev=0.01)),
    'out': tf.Variable(tf.random_normal([480, 10]))
}
biases = {
    'bd1': tf.Variable(tf.random_normal([600])),
    'bd2': tf.Variable(tf.random_normal([480])),
    'out': tf.Variable(tf.random_normal([10]))
}

# Construct model
pred = dnn(x, weights, biases, keep_prob)

# Define loss and optimizer
# NOTE: in TensorFlow 1.x this positional call is what raises the
# ValueError quoted below; it needs named arguments (labels=..., logits=...).
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()

# tf.summary.scalar('loss', cost)
tf.summary.scalar('accuracy', accuracy)
# Merge all summaries to a single operator
merged_summary_op = tf.summary.merge_all()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter('/tmp/logs/ex12_dnn', graph=sess.graph)
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # Fit training using batch data
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys, keep_prob: dropout})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.})
            print('Iter ' + str(step * batch_size) + ', Minibatch Loss= ' +
                  '{:.6f}'.format(loss) + ', Training Accuracy= ' + '{:.5f}'.format(acc))
            summary_str = sess.run(merged_summary_op,
                                   feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.})
            summary_writer.add_summary(summary_str, step)
        step += 1
    print('Optimization Finished!')
    # Calculate accuracy for 256 mnist test images
    print('Testing Accuracy:', sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                                             y: mnist.test.labels[:256],
                                                             keep_prob: 1.}))  # 98%

I don't know exactly how to wire this together; I modified the code several times and every run raised an error.

The errors look like this:

ValueError: Only call `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...)
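For reference, this particular ValueError states its own fix: since TensorFlow 1.0, tf.nn.softmax_cross_entropy_with_logits must be called with keyword arguments, so the cost line in the script above would need to read:

# Keyword arguments are mandatory in TF 1.x for this op.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred))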

Answers

Answer 1:

I'm not sure I've understood you correctly, but the line mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True) is what loads the MNIST data. Remove it, use your TFRecord reading code to load the TFRecord data instead, and replace every use of mnist in the training code below it as well. At the same time, make sure the parameters of the operations you use (input sizes, convolutions, and so on) match the TFRecord data.
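To make that concrete, here is a minimal sketch of what the rewiring might look like, staying with the TF 1.x queue-based pipeline that read_and_decode already uses. Two assumptions to flag: n_input and the 'wd1' weight shape would have to change to 512 * 288 * 3 to match the decoded images, and the integer labels are one-hot encoded on the fly; the tf.one_hot call and the shuffle_batch capacities are illustrative choices, not part of the original code:

# Sketch only: feed TFRecord data into the training loop (TF 1.x queues).
img, label = read_and_decode('train.tfrecords')
# Draw shuffled mini-batches; capacity/min_after_dequeue are illustrative.
img_batch, label_batch = tf.train.shuffle_batch(
    [img, label], batch_size=batch_size,
    capacity=2000, min_after_dequeue=1000)
# Match the placeholder shapes: flatten images, one-hot the integer labels.
img_batch = tf.reshape(img_batch, [batch_size, 512 * 288 * 3])  # n_input must equal 512*288*3
label_batch = tf.one_hot(label_batch, n_classes)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Queue runners must be started, or reader.read() will block forever.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    step = 1
    while step * batch_size < training_iters:
        # Materialize a batch from the queue, then feed it exactly as before.
        batch_xs, batch_ys = sess.run([img_batch, label_batch])
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys, keep_prob: dropout})
        step += 1
    coord.request_stop()
    coord.join(threads)

The same pattern, pointed at test.tfrecords, replaces the mnist.test.images / mnist.test.labels slice used for the final accuracy check.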

Tags: Python, Programming