
Self-study tutorial: Python layers.Conv1D method code examples

51自学网 2020-12-01 11:08:53
  Keras
This tutorial, Python layers.Conv1D method code examples, is quite practical; we hope it helps you.

This article collects typical usage examples of keras.layers.Conv1D in Python. If you have been wondering how exactly layers.Conv1D is used, or are looking for concrete examples of it, the curated code samples below may help. You can also explore further usage examples from the module it belongs to, keras.layers.

A total of 29 code examples of layers.Conv1D are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
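Before turning to the project examples, here is a minimal, self-contained sketch of the basic Conv1D call; the shapes and hyperparameters are chosen purely for illustration and are not taken from any of the projects below:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D

# Conv1D slides kernels along the time axis: input shape is (timesteps, features).
model = Sequential([
    Conv1D(filters=16, kernel_size=3, padding='same', activation='relu',
           input_shape=(100, 8))
])
print(model.output_shape)                              # (None, 100, 16)
print(model.predict(np.random.rand(2, 100, 8)).shape)  # (2, 100, 16)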

Example 1: create_model

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

def create_model(time_window_size, metric):
    model = Sequential()
    # Conv1D over the time window, then pooling, an LSTM, and a linear reconstruction layer
    model.add(Conv1D(filters=256, kernel_size=5, padding='same', activation='relu',
                     input_shape=(time_window_size, 1)))
    model.add(MaxPooling1D(pool_size=4))
    model.add(LSTM(64))
    model.add(Dense(units=time_window_size, activation='linear'))
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=[metric])
    # model.compile(optimizer="sgd", loss="mse", metrics=[metric])
    print(model.summary())
    return model
Developer: chen0040, Project: keras-anomaly-detection, Lines of code: 20, Source file: recurrent.py
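This builds a Conv1D + LSTM reconstruction model for anomaly detection. A minimal usage sketch, assuming create_model from Example 1 is in scope and using a hypothetical window size and random dummy data (not from the original project):

import numpy as np

model = create_model(time_window_size=32, metric='mae')  # hypothetical window size
x = np.random.rand(8, 32, 1)   # 8 windows, 32 timesteps, 1 feature
y = np.random.rand(8, 32)      # reconstruction target, one value per timestep
model.fit(x, y, epochs=1, batch_size=4, verbose=0)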


Example 2: CausalCNN

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.models import Model
from keras.layers import Input, Conv1D, BatchNormalization, Activation, Dense

def CausalCNN(n_filters, lr, decay, loss,
              seq_len, input_features,
              strides_len, kernel_size,
              dilation_rates):
    inputs = Input(shape=(seq_len, input_features), name='input_layer')
    x = inputs
    # Stack of causal convolutions with increasing dilation rates (WaveNet-style)
    for dilation_rate in dilation_rates:
        x = Conv1D(filters=n_filters,
                   kernel_size=kernel_size,
                   padding='causal',
                   dilation_rate=dilation_rate,
                   activation='linear')(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
    # x = Dense(7, activation='relu', name='dense_layer')(x)
    outputs = Dense(3, activation='sigmoid', name='output_layer')(x)
    causalcnn = Model(inputs, outputs=[outputs])
    return causalcnn
Developer: BruceBinBoxing, Project: Deep_Learning_Weather_Forecasting, Lines of code: 23, Source file: weather_model.py
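A usage sketch with hypothetical hyperparameters, assuming CausalCNN from Example 2 is in scope; note that lr, decay, loss and strides_len are accepted by the function but not actually used inside it:

model = CausalCNN(n_filters=32, lr=1e-3, decay=0.0, loss='mse',
                  seq_len=48, input_features=9,
                  strides_len=1, kernel_size=2,
                  dilation_rates=[1, 2, 4, 8])
model.compile(optimizer='adam', loss='mse')
model.summary()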


Example 3: create_model

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.models import Model
from keras.layers import (Input, Embedding, Bidirectional, LSTM, Dropout, Conv1D,
                          TimeDistributed, Dense, concatenate)
from keras.optimizers import Adam
from keras_contrib.layers import CRF  # CRF layer comes from keras-contrib

def create_model():
    inputs = Input(shape=(length,), dtype='int32', name='inputs')
    # Branch 1: embedding + BiLSTM
    embedding_1 = Embedding(len(vocab), EMBED_DIM, input_length=length, mask_zero=True)(inputs)
    bilstm = Bidirectional(LSTM(EMBED_DIM // 2, return_sequences=True))(embedding_1)
    bilstm_dropout = Dropout(DROPOUT_RATE)(bilstm)
    # Branch 2: embedding + Conv1D over a (2 * HALF_WIN_SIZE + 1)-token window
    embedding_2 = Embedding(len(vocab), EMBED_DIM, input_length=length)(inputs)
    con = Conv1D(filters=FILTERS, kernel_size=2 * HALF_WIN_SIZE + 1, padding='same')(embedding_2)
    con_d = Dropout(DROPOUT_RATE)(con)
    dense_con = TimeDistributed(Dense(DENSE_DIM))(con_d)
    # Merge both branches and decode with a CRF layer
    rnn_cnn = concatenate([bilstm_dropout, dense_con], axis=2)
    dense = TimeDistributed(Dense(len(chunk_tags)))(rnn_cnn)
    crf = CRF(len(chunk_tags), sparse_target=True)
    crf_output = crf(dense)
    model = Model(inputs=[inputs], outputs=[crf_output])
    model.compile(loss=crf.loss_function, optimizer=Adam(), metrics=[crf.accuracy])
    return model
Developer: jtyoui, Project: Jtyoui, Lines of code: 18, Source file: cnn_rnn_crf.py
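The function reads several module-level constants (length, vocab, chunk_tags, EMBED_DIM, and so on) that are defined elsewhere in the original project. A rough usage sketch with purely illustrative values, assuming keras-contrib is installed and create_model from Example 3 is defined in the same script:

# Hypothetical module-level constants that create_model reads (values are illustrative only)
length = 100
vocab = ['<pad>'] + list('abcdefg')     # token vocabulary
chunk_tags = ['O', 'B-LOC', 'I-LOC']    # BIO tag set
EMBED_DIM, DROPOUT_RATE = 64, 0.5
FILTERS, HALF_WIN_SIZE, DENSE_DIM = 64, 2, 64

model = create_model()
model.summary()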


Example 4: ann_model

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.models import Model
from keras.layers import (Input, Conv1D, Flatten, Dense, Activation,
                          BatchNormalization, Dropout)

def ann_model(input_shape):
    inp = Input(shape=input_shape, name='mfcc_in')
    model = inp
    # Two Conv1D feature extractors over the MFCC frames
    model = Conv1D(filters=12, kernel_size=3, activation='relu')(model)
    model = Conv1D(filters=12, kernel_size=3, activation='relu')(model)
    model = Flatten()(model)
    model = Dense(56)(model)
    model = Activation('relu')(model)
    model = BatchNormalization()(model)
    model = Dropout(0.2)(model)
    model = Dense(28)(model)
    model = Activation('relu')(model)
    model = BatchNormalization()(model)
    # Single sigmoid output for binary classification
    model = Dense(1)(model)
    model = Activation('sigmoid')(model)
    model = Model(inp, model)
    return model
Developer: tympanix, Project: subsync, Lines of code: 24, Source file: train_ann.py
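A usage sketch assuming ann_model from Example 4 is in scope; the MFCC input shape below (20 frames of 13 coefficients) is a hypothetical choice, not necessarily the shape used by the original project:

import numpy as np

model = ann_model(input_shape=(20, 13))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(model.predict(np.random.rand(4, 20, 13)).shape)   # (4, 1) sigmoid scores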


Example 5: DiscriminatorConv

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.models import Model
from keras.layers import Input, Embedding, Dropout, Dense

def DiscriminatorConv(V, E, filter_sizes, num_filters, dropout):
    '''
    Another discriminator model, currently unused because Keras doesn't support
    masking for Conv1D, which has a large influence on training.
    # Arguments:
        V: int, vocabulary size
        E: int, embedding size
        filter_sizes: list of int, filter size of each Conv1D
        num_filters: list of int, number of filters of each Conv1D
        dropout: float
    # Returns:
        discriminator: keras model
            input: word ids, shape = (B, T)
            output: probability of being true data or not, shape = (B, 1)
    '''
    # VariousConv1D (see Example 6) and Highway are helpers defined in the same models.py
    input = Input(shape=(None,), dtype='int32', name='Input')   # (B, T)
    out = Embedding(V, E, name='Embedding')(input)  # (B, T, E)
    out = VariousConv1D(out, filter_sizes, num_filters)
    out = Highway(out, num_layers=1)
    out = Dropout(dropout, name='Dropout')(out)
    out = Dense(1, activation='sigmoid', name='FC')(out)
    discriminator = Model(input, out)
    return discriminator
Developer: tyo-yo, Project: SeqGAN, Lines of code: 26, Source file: models.py
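A usage sketch with hypothetical SeqGAN-style settings; it assumes DiscriminatorConv above and the Highway helper from the same models.py are both in scope:

# Hypothetical hyperparameters (illustrative only)
V, E = 5000, 64                       # vocabulary size, embedding size
filter_sizes = [2, 3, 4, 5]
num_filters = [100, 100, 100, 100]
discriminator = DiscriminatorConv(V, E, filter_sizes, num_filters, dropout=0.25)
discriminator.compile(optimizer='adam', loss='binary_crossentropy')
discriminator.summary()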


Example 6: VariousConv1D

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.layers import Conv1D, GlobalMaxPooling1D, Concatenate

def VariousConv1D(x, filter_sizes, num_filters, name_prefix=''):
    '''
    Layer wrapper function for Conv1Ds with various filter sizes
    # Arguments:
        x: tensor, shape = (B, T, E)
        filter_sizes: list of int, filter size of each Conv1D
        num_filters: list of int, number of filters of each Conv1D
        name_prefix: str, layer name prefix
    # Returns:
        out: tensor, shape = (B, sum(num_filters))
    '''
    conv_outputs = []
    for filter_size, n_filter in zip(filter_sizes, num_filters):
        conv_name = '{}VariousConv1D/Conv1D/filter_size_{}'.format(name_prefix, filter_size)
        pooling_name = '{}VariousConv1D/MaxPooling/filter_size_{}'.format(name_prefix, filter_size)
        conv_out = Conv1D(n_filter, filter_size, name=conv_name)(x)  # (B, time_steps, n_filter)
        conv_out = GlobalMaxPooling1D(name=pooling_name)(conv_out)   # (B, n_filter)
        conv_outputs.append(conv_out)
    concatenate_name = '{}VariousConv1D/Concatenate'.format(name_prefix)
    out = Concatenate(name=concatenate_name)(conv_outputs)
    return out
Developer: tyo-yo, Project: SeqGAN, Lines of code: 23, Source file: models.py
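A usage sketch wrapping VariousConv1D in a small model, assuming classic Keras and hypothetical vocabulary and filter settings:

from keras.layers import Input, Embedding
from keras.models import Model

tokens = Input(shape=(None,), dtype='int32')            # (B, T) word ids
emb = Embedding(5000, 64)(tokens)                       # (B, T, 64)
feats = VariousConv1D(emb, [2, 3, 4], [16, 16, 16])     # (B, 48)
print(Model(tokens, feats).output_shape)                # (None, 48)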


Example 7: construct_model

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.models import Sequential
from keras.layers import Conv1D, MaxPool1D, BatchNormalization, Flatten, Dropout, Dense
from keras.legacy.layers import MaxoutDense  # MaxoutDense lives in keras.legacy.layers in older Keras 2 releases
from keras.regularizers import l2

def construct_model(classe_nums):
    model = Sequential()
    # Block 1: two Conv1D layers over (99 frames, 40 features) with pooling
    model.add(
        Conv1D(filters=256, kernel_size=3, strides=1, activation='relu', input_shape=(99, 40), name='block1_conv1'))
    model.add(MaxPool1D(pool_size=2, name='block1_pool1'))
    model.add(BatchNormalization(momentum=0.9, epsilon=1e-5, axis=1))
    model.add(Conv1D(filters=256, kernel_size=3, strides=1, activation='relu', name='block1_conv2'))
    model.add(MaxPool1D(pool_size=2, name='block1_pool2'))
    model.add(Flatten(name='block1_flat1'))
    model.add(Dropout(0.5, name='block1_drop1'))
    # Block 2: dense + maxout classifier head
    model.add(Dense(512, activation='relu', name='block2_dense2'))
    model.add(MaxoutDense(512, nb_feature=4, name="block2_maxout2"))
    model.add(Dropout(0.5, name='block2_drop2'))
    model.add(Dense(512, activation='relu', name='block2_dense3', kernel_regularizer=l2(1e-4)))
    model.add(MaxoutDense(512, nb_feature=4, name="block2_maxout3"))
    model.add(Dense(classe_nums, activation='softmax', name="predict"))
    # plot_model(model, to_file='model_struct.png', show_shapes=True, show_layer_names=False)
    model.summary()
Developer: houzhengzhang, Project: speaker_recognition, Lines of code: 27, Source file: plot_model_struct.py
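Note that the function only prints the architecture and returns nothing, which matches its source file plot_model_struct.py. A hypothetical call, assuming construct_model from Example 7 is in scope:

construct_model(classe_nums=10)   # e.g. 10 speaker classes (illustrative value)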


Example 8: shortcut_pool

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
from keras.layers import Conv1D, BatchNormalization, Add, ReLU

def shortcut_pool(inputs, output, filters=256, pool_type='max', shortcut=True):
    """
    ResNet-style shortcut (skip / residual) connection: an identity mapping,
    block + f(block), combined with downsampling.
    Reference: https://github.com/zonetrooper32/VDCNN/blob/keras_version/vdcnn.py
    :param inputs: tensor
    :param output: tensor
    :param filters: int
    :param pool_type: str, 'max', 'k-max', 'conv' or other
    :param shortcut: boolean
    :return: tensor
    """
    if shortcut:
        # 1x1 convolution with stride 2 so the skip branch matches the downsampled main branch
        conv_2 = Conv1D(filters=filters, kernel_size=1, strides=2, padding='SAME')(inputs)
        conv_2 = BatchNormalization()(conv_2)
        output = downsampling(output, pool_type=pool_type)  # downsampling is defined in Example 9
        out = Add()([output, conv_2])
    else:
        out = ReLU()(inputs)
        out = downsampling(out, pool_type=pool_type)
    if pool_type is not None:  # double the number of filters
        out = Conv1D(filters=filters * 2, kernel_size=1, strides=1, padding='SAME')(out)
        out = BatchNormalization()(out)
    return out
Developer: yongzhuo, Project: Keras-TextClassification, Lines of code: 27, Source file: graph.py
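A quick shape check for the shortcut branch: a kernel-size-1, stride-2 Conv1D halves the time dimension, so it lines up with a stride-2 pooled main branch before the Add(). The pooling layer below is only a stand-in for the project's downsampling helper, and the shapes are illustrative:

from keras.layers import Input, MaxPooling1D, Conv1D
from keras.models import Model

x = Input(shape=(128, 256))
main = MaxPooling1D(pool_size=3, strides=2, padding='same')(x)   # (None, 64, 256)
skip = Conv1D(256, kernel_size=1, strides=2, padding='same')(x)  # (None, 64, 256)
print(Model(x, [main, skip]).output_shape)   # both branches align for Add()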


Example 9: downsampling

# Required imports: from keras import layers  [or: from keras.layers import Conv1D]
def downsampling(inputs, pool_type='max'):
    """
    In addition, downsampling with stride 2 essentially doubles the effective coverage
    (i.e., coverage in the original document) of the convolution kernel;
    therefore, after going through downsampling L times,
    associations among words within a distance on the order of 2^L can be represented.
    Thus, deep pyramid CNN is computationally efficient