
Tutorial: Python layers.CuDNNGRU Code Examples

51自学网 2020-12-01 11:09:10
  Keras

This article collects typical usage examples of keras.layers.CuDNNGRU in Python. If you have been asking yourself what layers.CuDNNGRU does, how it is used, or where to find worked examples, the curated code samples below should help. You can also browse further usage examples from the keras.layers module.

The following 12 code examples of layers.CuDNNGRU are shown, ordered by popularity by default.
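Before diving into the examples, a minimal sketch of the layer itself may help. In Keras 2.x (with the TensorFlow 1.x backend), CuDNNGRU is importable from keras.layers; it runs only on a CUDA-enabled GPU, uses a fixed tanh activation, and is otherwise close to a drop-in replacement for GRU. The shapes and unit counts below are illustrative, not taken from any of the projects quoted later.

# A minimal sketch (Keras 2.x, TensorFlow backend, CUDA GPU required).
from keras.layers import Input, CuDNNGRU, Dense
from keras.models import Model

inputs = Input(shape=(100, 32))                   # (timesteps, features)
x = CuDNNGRU(64, return_sequences=False)(inputs)  # GPU-only; fixed tanh activation
outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs, outputs)
model.compile(loss='binary_crossentropy', optimizer='adam')
model.summary()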

Example 1: get_model

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
# Imports needed by this excerpt:
from keras.layers import Input, Embedding, Bidirectional, CuDNNGRU, Dropout, Dense
from keras.models import Model
from keras.optimizers import RMSprop

def get_model(embedding_matrix, sequence_length, dropout_rate, recurrent_units, dense_size):
    input_layer = Input(shape=(sequence_length,))
    # Frozen pretrained embeddings
    embedding_layer = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                                weights=[embedding_matrix], trainable=False)(input_layer)
    x = Bidirectional(CuDNNGRU(recurrent_units, return_sequences=True))(embedding_layer)
    x = Dropout(dropout_rate)(x)
    x = Bidirectional(CuDNNGRU(recurrent_units, return_sequences=False))(x)
    x = Dense(dense_size, activation="relu")(x)
    output_layer = Dense(6, activation="sigmoid")(x)  # 6 labels, multi-label setup
    model = Model(inputs=input_layer, outputs=output_layer)
    model.compile(loss='binary_crossentropy',
                  optimizer=RMSprop(clipvalue=1, clipnorm=1),
                  metrics=['accuracy'])
    return model
Developer: PavelOstyakov | Project: toxic | Lines: 18 | Source: model.py
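A hedged usage sketch for the function above; the vocabulary size, embedding dimension, and hyperparameter values are illustrative, not taken from the toxic repository:

import numpy as np

# Hypothetical stand-in for a pretrained embedding matrix (20k words x 300 dims).
embedding_matrix = np.random.rand(20000, 300).astype('float32')
model = get_model(embedding_matrix, sequence_length=200,
                  dropout_rate=0.3, recurrent_units=64, dense_size=32)
model.summary()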


Example 2: create_model

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def create_model(self, hyper_parameters):
    """
        Build the neural network.
    :param hyper_parameters: json, hyper parameters of network
    :return: tensor, model
    """
    super().create_model(hyper_parameters)
    x = self.word_embedding.output
    # x = Reshape((self.len_max, self.embed_size, 1))(embedding)
    if self.rnn_type == "LSTM":
        layer_cell = LSTM
    elif self.rnn_type == "GRU":
        layer_cell = GRU
    elif self.rnn_type == "CuDNNLSTM":
        layer_cell = CuDNNLSTM
    elif self.rnn_type == "CuDNNGRU":
        layer_cell = CuDNNGRU
    else:
        layer_cell = GRU
    # Stacked bidirectional RNN layers.
    # Note: activation='relu' applies to LSTM/GRU only; the CuDNN variants
    # use a fixed tanh activation and do not accept an activation argument.
    for nrl in range(self.num_rnn_layers):
        x = Bidirectional(layer_cell(units=self.rnn_units,
                                     return_sequences=True,
                                     activation='relu',
                                     kernel_regularizer=regularizers.l2(0.32 * 0.1),
                                     recurrent_regularizer=regularizers.l2(0.32)
                                     ))(x)
        x = Dropout(self.dropout)(x)
    x = Flatten()(x)
    # Final softmax classifier
    dense_layer = Dense(self.label, activation=self.activate_classify)(x)
    output = [dense_layer]
    self.model = Model(self.word_embedding.input, output)
    self.model.summary(120)
Developer: yongzhuo | Project: Keras-TextClassification | Lines: 37 | Source: graph.py
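A design note on the if/elif chain above: the same dispatch is often written as a dictionary lookup. A minimal sketch, assuming the same class names as the example:

# Map the configured string to a recurrent layer class; default to GRU.
RNN_CELLS = {"LSTM": LSTM, "GRU": GRU,
             "CuDNNLSTM": CuDNNLSTM, "CuDNNGRU": CuDNNGRU}
layer_cell = RNN_CELLS.get(self.rnn_type, GRU)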


Example 3: Archi_3GRU16BI_1FC256

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
# Also assumes: Input, Bidirectional, Dropout from keras.layers; Model from keras.models;
# l2 from keras.regularizers; and the local helpers fc_bn_relu_drop and softmax.
def Archi_3GRU16BI_1FC256(X, nbclasses):
    #-- get the input sizes
    m, L, depth = X.shape
    input_shape = (L, depth)

    #-- parameters of the architecture
    l2_rate = 1.e-6
    dropout_rate = 0.5
    nb_rnn = 3
    nbunits_rnn = 16
    nbunits_fc = 256

    # Define the input placeholder.
    X_input = Input(input_shape)

    #-- nb_rnn recurrent layers (all but the last return full sequences)
    X = X_input
    for add in range(nb_rnn - 1):
        X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=True))(X)
        X = Dropout(dropout_rate)(X)
    X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=False))(X)
    X = Dropout(dropout_rate)(X)

    #-- 1 FC layer
    X = fc_bn_relu_drop(X, nbunits=nbunits_fc, kernel_regularizer=l2(l2_rate), dropout_rate=dropout_rate)

    #-- SOFTMAX layer
    out = softmax(X, nbclasses, kernel_regularizer=l2(l2_rate))

    # Create model.
    return Model(inputs=X_input, outputs=out, name='Archi_3GRU16BI_1FC256')
#-----------------------------------------------------------------------
Developer: charlotte-pel | Project: temporalCNN | Lines: 35 | Source: architecture_rnn.py


Example 4: Archi_3GRU32BI_1FC256

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def Archi_3GRU32BI_1FC256(X, nbclasses):
    #-- get the input sizes
    m, L, depth = X.shape
    input_shape = (L, depth)

    #-- parameters of the architecture
    l2_rate = 1.e-6
    dropout_rate = 0.5
    nb_rnn = 3
    nbunits_rnn = 32
    nbunits_fc = 256

    # Define the input placeholder.
    X_input = Input(input_shape)

    #-- nb_rnn recurrent layers (all but the last return full sequences)
    X = X_input
    for add in range(nb_rnn - 1):
        X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=True))(X)
        X = Dropout(dropout_rate)(X)
    X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=False))(X)
    X = Dropout(dropout_rate)(X)

    #-- 1 FC layer
    X = fc_bn_relu_drop(X, nbunits=nbunits_fc, kernel_regularizer=l2(l2_rate), dropout_rate=dropout_rate)

    #-- SOFTMAX layer
    out = softmax(X, nbclasses, kernel_regularizer=l2(l2_rate))

    # Create model.
    return Model(inputs=X_input, outputs=out, name='Archi_3GRU32BI_1FC256')
#-----------------------------------------------------------------------
Developer: charlotte-pel | Project: temporalCNN | Lines: 36 | Source: architecture_rnn.py


Example 5: Archi_3GRU64BI_1FC256

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def Archi_3GRU64BI_1FC256(X, nbclasses):
    #-- get the input sizes
    m, L, depth = X.shape
    input_shape = (L, depth)

    #-- parameters of the architecture
    l2_rate = 1.e-6
    dropout_rate = 0.5
    nb_rnn = 3
    nbunits_rnn = 64
    nbunits_fc = 256

    # Define the input placeholder.
    X_input = Input(input_shape)

    #-- nb_rnn recurrent layers (all but the last return full sequences)
    X = X_input
    for add in range(nb_rnn - 1):
        X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=True))(X)
        X = Dropout(dropout_rate)(X)
    X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=False))(X)
    X = Dropout(dropout_rate)(X)

    #-- 1 FC layer
    X = fc_bn_relu_drop(X, nbunits=nbunits_fc, kernel_regularizer=l2(l2_rate), dropout_rate=dropout_rate)

    #-- SOFTMAX layer
    out = softmax(X, nbclasses, kernel_regularizer=l2(l2_rate))

    # Create model.
    return Model(inputs=X_input, outputs=out, name='Archi_3GRU64BI_1FC256')
#-----------------------------------------------------------------------
Developer: charlotte-pel | Project: temporalCNN | Lines: 36 | Source: architecture_rnn.py


Example 6: Archi_3GRU128BI_1FC256

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def Archi_3GRU128BI_1FC256(X, nbclasses):
    #-- get the input sizes
    m, L, depth = X.shape
    input_shape = (L, depth)

    #-- parameters of the architecture
    l2_rate = 1.e-6
    dropout_rate = 0.5
    nb_rnn = 3
    nbunits_rnn = 128
    nbunits_fc = 256

    # Define the input placeholder.
    X_input = Input(input_shape)

    #-- nb_rnn recurrent layers (all but the last return full sequences)
    X = X_input
    for add in range(nb_rnn - 1):
        X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=True))(X)
        X = Dropout(dropout_rate)(X)
    X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=False))(X)
    X = Dropout(dropout_rate)(X)

    #-- 1 FC layer
    X = fc_bn_relu_drop(X, nbunits=nbunits_fc, kernel_regularizer=l2(l2_rate), dropout_rate=dropout_rate)

    #-- SOFTMAX layer
    out = softmax(X, nbclasses, kernel_regularizer=l2(l2_rate))

    # Create model.
    return Model(inputs=X_input, outputs=out, name='Archi_3GRU128BI_1FC256')
#-----------------------------------------------------------------------
Developer: charlotte-pel | Project: temporalCNN | Lines: 36 | Source: architecture_rnn.py


Example 7: Archi_3GRU256BI_1FC256

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def Archi_3GRU256BI_1FC256(X, nbclasses):
    #-- get the input sizes
    m, L, depth = X.shape
    input_shape = (L, depth)

    #-- parameters of the architecture
    l2_rate = 1.e-6
    dropout_rate = 0.5
    nb_rnn = 3
    nbunits_rnn = 256
    nbunits_fc = 256

    # Define the input placeholder.
    X_input = Input(input_shape)

    #-- nb_rnn recurrent layers (all but the last return full sequences)
    X = X_input
    for add in range(nb_rnn - 1):
        X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=True))(X)
        X = Dropout(dropout_rate)(X)
    X = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=False))(X)
    X = Dropout(dropout_rate)(X)

    #-- 1 FC layer
    X = fc_bn_relu_drop(X, nbunits=nbunits_fc, kernel_regularizer=l2(l2_rate), dropout_rate=dropout_rate)

    #-- SOFTMAX layer
    out = softmax(X, nbclasses, kernel_regularizer=l2(l2_rate))

    # Create model.
    return Model(inputs=X_input, outputs=out, name='Archi_3GRU256BI_1FC256')
#--------------------- Switcher for running the architectures
Developer: charlotte-pel | Project: temporalCNN | Lines: 35 | Source: architecture_rnn.py
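Examples 3 through 7 differ only in nbunits_rnn (16, 32, 64, 128 and 256 units). They could be collapsed into one parameterized factory; a sketch, assuming the same imports and the fc_bn_relu_drop and softmax helpers from architecture_rnn.py:

def Archi_3GRUBI_1FC256(X, nbclasses, nbunits_rnn):
    # Same structure as Examples 3-7, with the GRU width as a parameter.
    m, L, depth = X.shape
    l2_rate, dropout_rate, nb_rnn, nbunits_fc = 1.e-6, 0.5, 3, 256
    X_input = Input((L, depth))
    Y = X_input
    for _ in range(nb_rnn - 1):
        Y = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=True))(Y)
        Y = Dropout(dropout_rate)(Y)
    Y = Bidirectional(CuDNNGRU(nbunits_rnn, return_sequences=False))(Y)
    Y = Dropout(dropout_rate)(Y)
    Y = fc_bn_relu_drop(Y, nbunits=nbunits_fc, kernel_regularizer=l2(l2_rate), dropout_rate=dropout_rate)
    out = softmax(Y, nbclasses, kernel_regularizer=l2(l2_rate))
    return Model(inputs=X_input, outputs=out, name='Archi_3GRU%dBI_1FC256' % nbunits_rnn)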


Example 8: create_model

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def create_model(self):
    dat_input = Input(shape=(self.tdatlen,))   # source-code tokens
    com_input = Input(shape=(self.comlen,))    # comment (decoder) tokens
    sml_input = Input(shape=(self.smllen,))    # flattened AST tokens

    ee = Embedding(output_dim=self.embdims, input_dim=self.tdatvocabsize, mask_zero=False)(dat_input)
    se = Embedding(output_dim=self.smldims, input_dim=self.smlvocabsize, mask_zero=False)(sml_input)

    # AST encoder; its final state seeds the code encoder
    se_enc = CuDNNGRU(self.recdims, return_state=True, return_sequences=True)
    seout, state_sml = se_enc(se)

    enc = CuDNNGRU(self.recdims, return_state=True, return_sequences=True)
    encout, state_h = enc(ee, initial_state=state_sml)

    # Decoder, initialized with the code encoder's final state
    de = Embedding(output_dim=self.embdims, input_dim=self.comvocabsize, mask_zero=False)(com_input)
    dec = CuDNNGRU(self.recdims, return_sequences=True)
    decout = dec(de, initial_state=state_h)

    # Dot-product attention over the code encoder
    attn = dot([decout, encout], axes=[2, 2])
    attn = Activation('softmax')(attn)
    context = dot([attn, encout], axes=[2, 1])

    # Dot-product attention over the AST encoder
    ast_attn = dot([decout, seout], axes=[2, 2])
    ast_attn = Activation('softmax')(ast_attn)
    ast_context = dot([ast_attn, seout], axes=[2, 1])

    context = concatenate([context, decout, ast_context])

    out = TimeDistributed(Dense(self.recdims, activation="relu"))(context)
    out = Flatten()(out)
    out = Dense(self.comvocabsize, activation="softmax")(out)

    model = Model(inputs=[dat_input, com_input, sml_input], outputs=out)

    if self.config['multigpu']:
        model = keras.utils.multi_gpu_model(model, gpus=2)

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return self.config, model
Developer: mcmillco | Project: funcom | Lines: 43 | Source: ast_attendgru_xtra.py
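The dot/softmax/dot pattern in Example 8 is dot-product (Luong-style) attention built from stock Keras layers: dot([decout, encout], axes=[2, 2]) scores every decoder timestep against every encoder timestep, Activation('softmax') normalizes the scores over encoder positions, and the second dot takes the score-weighted sum of encoder states to form a context vector per decoder step. The model applies the pattern twice, once over the token encoder and once over the AST encoder, then concatenates both contexts with the decoder output.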


Example 9: CNN_BIGRU

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def CNN_BIGRU():
    # inp is a one-hot encoded version of inp_alt
    inp          = Input(shape=(maxlen_seq, n_words))
    inp_alt      = Input(shape=(maxlen_seq,))
    inp_profiles = Input(shape=(maxlen_seq, 22))

    # Concatenate embedded and unembedded input
    x_emb = Embedding(input_dim=n_words, output_dim=64,
                      input_length=maxlen_seq)(inp_alt)
    x = Concatenate(axis=-1)([inp, x_emb, inp_profiles])

    x = super_conv_block(x)
    x = conv_block(x)
    x = super_conv_block(x)
    x = conv_block(x)
    x = super_conv_block(x)
    x = conv_block(x)

    x = Bidirectional(CuDNNGRU(units=256, return_sequences=True, recurrent_regularizer=l2(0.2)))(x)
    x = TimeDistributed(Dropout(0.5))(x)
    x = TimeDistributed(Dense(256, activation="relu"))(x)
    x = TimeDistributed(Dropout(0.5))(x)

    y = TimeDistributed(Dense(n_tags, activation="softmax"))(x)

    model = Model([inp, inp_alt, inp_profiles], y)
    return model
Developer: idrori | Project: cu-ssp | Lines: 30 | Source: model_1.py


Example 10: build_model

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def build_model():
    input = Input(shape=(None,))
    profiles_input = Input(shape=(None, 22))

    # Defining an embedding layer mapping from the words (n_words) to a vector of len 250
    x1 = Embedding(input_dim=n_words, output_dim=250, input_length=None)(input)
    x1 = concatenate([x1, profiles_input], axis=2)

    x2 = Embedding(input_dim=n_words, output_dim=125, input_length=None)(input)
    x2 = concatenate([x2, profiles_input], axis=2)

    x1 = Dense(1200, activation="relu")(x1)
    x1 = Dropout(0.5)(x1)

    # Defining bidirectional GRUs using the embedded representation of the inputs
    x2 = Bidirectional(CuDNNGRU(units=500, return_sequences=True))(x2)
    x2 = Bidirectional(CuDNNGRU(units=100, return_sequences=True))(x2)

    COMBO_MOVE = concatenate([x1, x2])
    w = Dense(500, activation="relu")(COMBO_MOVE)  # try 500
    w = Dropout(0.4)(w)
    w = tcn.TCN()(w)

    y = TimeDistributed(Dense(n_tags, activation="softmax"))(w)

    # Defining the model as a whole and printing the summary
    model = Model([input, profiles_input], y)
    # model.summary()

    # Setting up the model with categorical x-entropy loss and the custom accuracy function as accuracy
    adamOptimizer = Adam(lr=0.0025, beta_1=0.8, beta_2=0.8, epsilon=None, decay=0.0001, amsgrad=False)
    model.compile(optimizer=adamOptimizer, loss="categorical_crossentropy", metrics=["accuracy", accuracy])
    return model
Developer: idrori | Project: cu-ssp | Lines: 36 | Source: model_3.py
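Note that Example 10 mixes recurrent and convolutional sequence models: after the bidirectional CuDNNGRU stack, the tcn.TCN() call presumably refers to the temporal convolutional network layer from the keras-tcn package, and the custom accuracy function passed to metrics is defined elsewhere in model_3.py.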


Example 11: create_model

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def create_model(self, hyper_parameters):
    """
        Build the neural network.
    :param hyper_parameters: json, hyper parameters of network
    :return: tensor, model
    """
    super().create_model(hyper_parameters)
    x = self.word_embedding.output
    embedding_output_spatial = SpatialDropout1D(self.dropout_spatial)(x)
    # Note: in this source, self.rnn_units serves both as the cell-type
    # switch (a string) and as the unit count passed to the chosen cell.
    if self.rnn_units == "LSTM":
        layer_cell = LSTM
    elif self.rnn_units == "GRU":
        layer_cell = GRU
    elif self.rnn_units == "CuDNNLSTM":
        layer_cell = CuDNNLSTM
    elif self.rnn_units == "CuDNNGRU":
        layer_cell = CuDNNGRU
    else:
        layer_cell = GRU
    # CNN: parallel 1-D convolutions with different kernel sizes
    convs = []
    for kernel_size in self.filters:
        conv = Conv1D(self.filters_num,
                      kernel_size=kernel_size,
                      strides=1,
                      padding='SAME',
                      kernel_regularizer=regularizers.l2(self.l2),
                      bias_regularizer=regularizers.l2(self.l2),
                      )(embedding_output_spatial)
        convs.append(conv)
    x = Concatenate(axis=1)(convs)
    # Bi-RNN; the paper uses an LSTM
    x = Bidirectional(layer_cell(units=self.rnn_units,
                                 return_sequences=True,
                                 activation='relu',
                                 kernel_regularizer=regularizers.l2(self.l2),
                                 recurrent_regularizer=regularizers.l2(self.l2)
                                 ))(x)
    x = Dropout(self.dropout)(x)
    x = Flatten()(x)
    # Final softmax classifier
    dense_layer = Dense(self.label, activation=self.activate_classify)(x)
    output = [dense_layer]
    self.model = Model(self.word_embedding.input, output)
    self.model.summary(120)
Developer: yongzhuo | Project: Keras-TextClassification | Lines: 48 | Source: graph.py


Example 12: create_model

# Required module import: from keras import layers [as alias]
# Or: from keras.layers import CuDNNGRU [as alias]
def create_model(self, hyper_parameters):
    """
        Build the neural network, a bit like an RCNN.
    :param hyper_parameters: json, hyper parameters of network
    :return: tensor, model
    """
    super().create_model(hyper_parameters)
    x = self.word_embedding.output
    x = Activation('tanh')(x)
    # entire embedding channels are dropped out instead of the
    # normal Keras embedding dropout, which drops all channels for entire words
    # many of the datasets contain so few words that losing one or more words can alter the emotions completely
    x = SpatialDropout1D(self.dropout_spatial)(x)
    if self.rnn_units == "LSTM":
        layer_cell = LSTM
    elif self.rnn_units == "GRU":
        layer_cell = GRU
    elif self.rnn_units == "CuDNNLSTM":
        layer_cell = CuDNNLSTM
    elif self.rnn_units == "CuDNNGRU":
        layer_cell = CuDNNGRU
    else:
        layer_cell = GRU
    # skip-connection from embedding to output eases gradient-flow and allows access to lower-level features
    # ordering of the way the merge is done is important for consistency with the pretrained model
    lstm_0_output = Bidirectional(layer_cell(units=self.rnn_units,
                                             return_sequences=True,
                                             activation='relu',
                                             kernel_regularizer=regularizers.l2(self.l2),
                                             recurrent_regularizer=regularizers.l2(self.l2)
                                             ), name="bi_lstm_0")(x)
    lstm_1_output = Bidirectional(layer_cell(units=self.rnn_units,
                                             return_sequences=True,
                                             activation='relu',
                                             kernel_regularizer=regularizers.l2(self.l2),
                                             recurrent_regularizer=regularizers.l2(self.l2)
                                             ), name="bi_lstm_1")(lstm_0_output)
    x = concatenate([lstm_1_output, lstm_0_output, x])
    # if return_attention is True in AttentionWeightedAverage, an additional tensor
    # representing the weight at each timestep is returned
    weights = None
    x = AttentionWeightedAverage(name='attlayer', return_attention=self.return_attention)(x)
    if self.return_attention:
        x, weights = x
    x = Dropout(self.dropout)(x)
    # x = Flatten()(x)
    # Final softmax classifier
    dense_layer = Dense(self.label, activation=self.activate_classify)(x)
    output = [dense_layer]
    self.model = Model(self.word_embedding.input, output)
    self.model.summary(120)
Developer: yongzhuo | Project: Keras-TextClassification | Lines: 59 | Source: graph.py

