Implementing an embedding layer for a CNN

Hi there. I have been trying to follow these two examples:

1) How to do text classification with CNNs, TensorFlow and word embedding

2) How to build an embedding layer in a TensorFlow RNN?

However, I ran into the following error for example 1:

Input 0 of layer conv2d_1 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 100, 20]

and the following error for example 2:

ValueError: Layer conv2d_1 expects 1 inputs, but it received 100 input tensors. Inputs received: [<tf.Tensor 'unstack:0' shape=(?, 20) dtype=float32>, <tf.Tensor 'unstack:1' shape=(?, 20) dtype=float32>, <tf.Tensor 'unstack:2' shape=(?, 20) dtype=float32>, ..., <tf.Tensor 'unstack:99' shape=(?, 20) dtype=float32>]
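As I understand it, the second error happens because `tf.unstack(word_vectors, axis=1)` returns a Python list of `MAX_DOCUMENT_LENGTH` separate tensors rather than one tensor, and `conv2d` only accepts a single input. A small numpy sketch of the same splitting (toy batch size of 2, not TensorFlow itself):

```python
import numpy as np

# Toy stand-in for word_vectors: (batch=2, MAX_DOCUMENT_LENGTH=100, EMBEDDING_SIZE=20)
word_vectors = np.zeros((2, 100, 20), dtype=np.float32)

# tf.unstack(word_vectors, axis=1) behaves like this slicing: it yields a
# Python list of MAX_DOCUMENT_LENGTH separate (batch, EMBEDDING_SIZE) arrays.
word_list = [word_vectors[:, i, :] for i in range(word_vectors.shape[1])]

print(len(word_list))      # 100 pieces -> conv2d sees 100 inputs, not 1
print(word_list[0].shape)  # (2, 20)
```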

I changed the code to work around the errors: instead of using word_list = tf.unstack(word_vectors, axis=1), I switched to the implementation below, but I do not know whether it is correct.

# Map word ids to embedding vectors: (batch, MAX_DOCUMENT_LENGTH, EMBEDDING_SIZE)
word_vectors = tf.contrib.layers.embed_sequence(
    x, vocab_size=n_words, embed_dim=EMBEDDING_SIZE)

# Add a trailing channel dimension so conv2d receives an ndim=4 input:
# (batch, MAX_DOCUMENT_LENGTH, EMBEDDING_SIZE, 1)
word_list = tf.reshape(word_vectors, [-1, MAX_DOCUMENT_LENGTH, EMBEDDING_SIZE, 1])

conv1, pool1 = cnn(word_list, FILTER_SHAPE1, 'CNN_Layer1')
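For what it is worth, this reshape is equivalent to appending a channel axis, which is exactly what the ndim=3 vs ndim=4 error was asking for. A shapes-only numpy sketch (assuming a batch size of 2):

```python
import numpy as np

MAX_DOCUMENT_LENGTH = 100
EMBEDDING_SIZE = 20

# ndim=3 input like the one that triggered the first error
word_vectors = np.zeros((2, MAX_DOCUMENT_LENGTH, EMBEDDING_SIZE), dtype=np.float32)

# The reshape and an explicit expand_dims produce the same ndim=4 NHWC tensor.
reshaped = word_vectors.reshape(-1, MAX_DOCUMENT_LENGTH, EMBEDDING_SIZE, 1)
expanded = np.expand_dims(word_vectors, axis=-1)  # like tf.expand_dims(..., -1)

print(reshaped.shape)                      # (2, 100, 20, 1)
print(np.array_equal(reshaped, expanded))  # True
```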

The cnn function and the remaining parameters are defined as follows:

MAX_DOCUMENT_LENGTH = 100
N_FILTERS = 10
FILTER_SHAPE1 = [20, 20]
POOLING_WINDOW = 4
POOLING_STRIDE = 2
MAX_LABEL = 15
EMBEDDING_SIZE = 20

def cnn(x, filter_shape, name):
    with tf.variable_scope(name):
        # Convolution over the (document_length, embedding) plane; VALID
        # padding shrinks each spatial dimension to (input - kernel + 1).
        conv1 = tf.layers.conv2d(
            x,
            filters=N_FILTERS,
            kernel_size=filter_shape,
            padding='VALID',
            activation=tf.nn.relu)
        # Max-pooling; SAME padding keeps output size at ceil(input / stride).
        pool1 = tf.layers.max_pooling2d(
            conv1,
            pool_size=POOLING_WINDOW,
            strides=POOLING_STRIDE,
            padding='SAME')

    return conv1, pool1
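To sanity-check the workaround, the shapes coming out of CNN_Layer1 can be computed by hand. This is just the standard VALID/SAME shape arithmetic applied to the parameters above, not a TensorFlow run:

```python
import math

MAX_DOCUMENT_LENGTH = 100
EMBEDDING_SIZE = 20
FILTER_SHAPE1 = [20, 20]
N_FILTERS = 10
POOLING_WINDOW = 4
POOLING_STRIDE = 2

# conv2d with padding='VALID', stride 1: out = in - kernel + 1
conv_h = MAX_DOCUMENT_LENGTH - FILTER_SHAPE1[0] + 1  # 100 - 20 + 1 = 81
conv_w = EMBEDDING_SIZE - FILTER_SHAPE1[1] + 1       # 20 - 20 + 1 = 1

# max_pooling2d with padding='SAME': out = ceil(in / stride)
pool_h = math.ceil(conv_h / POOLING_STRIDE)          # ceil(81 / 2) = 41
pool_w = math.ceil(conv_w / POOLING_STRIDE)          # ceil(1 / 2) = 1

print((conv_h, conv_w, N_FILTERS))  # conv1: (None, 81, 1, 10)
print((pool_h, pool_w, N_FILTERS))  # pool1: (None, 41, 1, 10)
```

So with the reshape in place, conv1 should come out as (None, 81, 1, 10) and pool1 as (None, 41, 1, 10).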
Posted 20/10/2018 at 13:52