
Tutorial: Python layers.merge method code examples

51自学网 | 2020-12-01 11:08:51 | Keras
This tutorial on Python layers.merge code examples is meant to be practical; we hope it helps you.

This article collects typical usage examples of the keras.layers.merge method in Python. If you have been wondering exactly what layers.merge does, how to call it, or what real uses of layers.merge look like, the curated code examples below may help. You can also explore further usage examples from its parent module, keras.layers.

The following presents 30 code examples of the layers.merge method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the site recommend better Python code examples.
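Before diving into the examples, note that merge is the Keras 1.x functional helper; it was removed in Keras 2 in favor of dedicated layers (Concatenate, Add, Multiply, ...) and their lowercase functional wrappers. The short sketch below is not taken from any of the examples that follow; it is a minimal illustration written against Keras 2, with placeholder tensors x1 and x2, contrasting the two styles.

from keras.layers import Input, Dense, add, concatenate
from keras.models import Model

inp = Input(shape=(16,))
x1 = Dense(32)(inp)
x2 = Dense(32)(inp)

# Keras 1.x style, as used throughout the examples below:
#   from keras.layers import merge
#   summed = merge([x1, x2], mode='sum')
#   concat = merge([x1, x2], mode='concat', concat_axis=-1)

# Keras 2.x equivalents:
summed = add([x1, x2])                    # element-wise sum
concat = concatenate([x1, x2], axis=-1)   # concatenation along the last axis

model = Model(inputs=inp, outputs=[summed, concat])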

Example 1: build_encoder

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def build_encoder(self):
    # Encoder
    img = Input(shape=self.img_shape)
    h = Flatten()(img)
    h = Dense(512)(h)
    h = LeakyReLU(alpha=0.2)(h)
    h = Dense(512)(h)
    h = LeakyReLU(alpha=0.2)(h)
    mu = Dense(self.latent_dim)(h)
    log_var = Dense(self.latent_dim)(h)
    latent_repr = merge([mu, log_var],
            mode=lambda p: p[0] + K.random_normal(K.shape(p[0])) * K.exp(p[1] / 2),
            output_shape=lambda p: p[0])
    return Model(img, latent_repr)

Author: eriklindernoren | Project: Keras-GAN | Lines: 19 | Source: aae.py
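Example 1 uses merge with a callable mode to implement the VAE-style reparameterization trick. As a hedged aside (this is not part of the original Keras-GAN code), the same sampling step is usually written with a Lambda layer in Keras 2; a minimal, self-contained sketch with placeholder shapes might look like this:

from keras.layers import Input, Dense, Lambda
from keras import backend as K

latent_dim = 10                      # placeholder latent size
h = Input(shape=(512,))              # stand-in for the encoder's hidden features
mu = Dense(latent_dim)(h)
log_var = Dense(latent_dim)(h)

def sampling(args):
    # z = mu + sigma * epsilon, with epsilon ~ N(0, 1) and sigma = exp(log_var / 2)
    mu, log_var = args
    return mu + K.random_normal(K.shape(mu)) * K.exp(log_var / 2)

latent_repr = Lambda(sampling)([mu, log_var])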


Example 2: yolo_body

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def yolo_body(inputs, num_anchors, num_classes):
    """Create YOLO_V2 model CNN body in Keras."""
    darknet = Model(inputs, darknet_body()(inputs))
    conv13 = darknet.get_layer('batchnormalization_13').output
    conv20 = compose(
        DarknetConv2D_BN_Leaky(1024, 3, 3),
        DarknetConv2D_BN_Leaky(1024, 3, 3))(darknet.output)
    # TODO: Allow Keras Lambda to use func arguments for output_shape?
    conv13_reshaped = Lambda(
        space_to_depth_x2,
        output_shape=space_to_depth_x2_output_shape,
        name='space_to_depth')(conv13)
    # Concat conv13 with conv20.
    x = merge([conv13_reshaped, conv20], mode='concat')
    x = DarknetConv2D_BN_Leaky(1024, 3, 3)(x)
    x = DarknetConv2D(num_anchors * (num_classes + 5), 1, 1)(x)
    return Model(inputs, x)

Author: PiSimo | Project: PiCamNN | Lines: 21 | Source: keras_yolo.py


Example 3: get_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def get_model(num_users, num_items, latent_dim, regs=[0, 0]):
    # Input variables
    user_input = Input(shape=(1,), dtype='int32', name='user_input')
    item_input = Input(shape=(1,), dtype='int32', name='item_input')

    MF_Embedding_User = Embedding(input_dim=num_users, output_dim=latent_dim, name='user_embedding',
                                  init=init_normal, W_regularizer=l2(regs[0]), input_length=1)
    MF_Embedding_Item = Embedding(input_dim=num_items, output_dim=latent_dim, name='item_embedding',
                                  init=init_normal, W_regularizer=l2(regs[1]), input_length=1)

    # Crucial to flatten an embedding vector!
    user_latent = Flatten()(MF_Embedding_User(user_input))
    item_latent = Flatten()(MF_Embedding_Item(item_input))

    # Element-wise product of user and item embeddings
    predict_vector = merge([user_latent, item_latent], mode='mul')

    # Final prediction layer
    # prediction = Lambda(lambda x: K.sigmoid(K.sum(x)), output_shape=(1,))(predict_vector)
    prediction = Dense(1, activation='sigmoid', init='lecun_uniform', name='prediction')(predict_vector)

    model = Model(input=[user_input, item_input],
                  output=prediction)
    return model

Author: hexiangnan | Project: neural_collaborative_filtering | Lines: 27 | Source: GMF.py


Example 4: block_inception_a

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def block_inception_a(input):
    if K.image_dim_ordering() == "th":
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 96, 1, 1)

    branch_1 = conv2d_bn(input, 64, 1, 1)
    branch_1 = conv2d_bn(branch_1, 96, 3, 3)

    branch_2 = conv2d_bn(input, 64, 1, 1)
    branch_2 = conv2d_bn(branch_2, 96, 3, 3)
    branch_2 = conv2d_bn(branch_2, 96, 3, 3)

    branch_3 = AveragePooling2D((3, 3), strides=(1, 1), border_mode='same')(input)
    branch_3 = conv2d_bn(branch_3, 96, 1, 1)

    x = merge([branch_0, branch_1, branch_2, branch_3], mode='concat', concat_axis=channel_axis)
    return x

Author: filonenkoa | Project: cnn_evaluation_smoke | Lines: 22 | Source: inception_v4.py


Example 5: block_reduction_a

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def block_reduction_a(input):
    if K.image_dim_ordering() == "th":
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 384, 3, 3, subsample=(2, 2), border_mode='valid')

    branch_1 = conv2d_bn(input, 192, 1, 1)
    branch_1 = conv2d_bn(branch_1, 224, 3, 3)
    branch_1 = conv2d_bn(branch_1, 256, 3, 3, subsample=(2, 2), border_mode='valid')

    branch_2 = MaxPooling2D((3, 3), strides=(2, 2), border_mode='valid')(input)

    x = merge([branch_0, branch_1, branch_2], mode='concat', concat_axis=channel_axis)
    return x

Author: filonenkoa | Project: cnn_evaluation_smoke | Lines: 18 | Source: inception_v4.py


Example 6: block_inception_b

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def block_inception_b(input):
    if K.image_dim_ordering() == "th":
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 384, 1, 1)

    branch_1 = conv2d_bn(input, 192, 1, 1)
    branch_1 = conv2d_bn(branch_1, 224, 1, 7)
    branch_1 = conv2d_bn(branch_1, 256, 7, 1)

    branch_2 = conv2d_bn(input, 192, 1, 1)
    branch_2 = conv2d_bn(branch_2, 192, 7, 1)
    branch_2 = conv2d_bn(branch_2, 224, 1, 7)
    branch_2 = conv2d_bn(branch_2, 224, 7, 1)
    branch_2 = conv2d_bn(branch_2, 256, 1, 7)

    branch_3 = AveragePooling2D((3, 3), strides=(1, 1), border_mode='same')(input)
    branch_3 = conv2d_bn(branch_3, 128, 1, 1)

    x = merge([branch_0, branch_1, branch_2, branch_3], mode='concat', concat_axis=channel_axis)
    return x

Author: filonenkoa | Project: cnn_evaluation_smoke | Lines: 25 | Source: inception_v4.py


Example 7: block_reduction_b

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def block_reduction_b(input):
    if K.image_dim_ordering() == "th":
        channel_axis = 1
    else:
        channel_axis = -1

    branch_0 = conv2d_bn(input, 192, 1, 1)
    branch_0 = conv2d_bn(branch_0, 192, 3, 3, subsample=(2, 2), border_mode='valid')

    branch_1 = conv2d_bn(input, 256, 1, 1)
    branch_1 = conv2d_bn(branch_1, 256, 1, 7)
    branch_1 = conv2d_bn(branch_1, 320, 7, 1)
    branch_1 = conv2d_bn(branch_1, 320, 3, 3, subsample=(2, 2), border_mode='valid')

    branch_2 = MaxPooling2D((3, 3), strides=(2, 2), border_mode='valid')(input)

    x = merge([branch_0, branch_1, branch_2], mode='concat', concat_axis=channel_axis)
    return x

Author: filonenkoa | Project: cnn_evaluation_smoke | Lines: 20 | Source: inception_v4.py


Example 8: _build

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def _build(self):
    print('Building Graph ...')
    inputs = Input(shape=(self.window_size, self.userTagIntent_vocab_size),
                   name='tagIntent_input')
    lstm_forward = LSTM(output_dim=self.hidden_size,
                        return_sequences=False,
                        name='LSTM_forward')(inputs)
    lstm_forward = Dropout(self.dropout)(lstm_forward)
    lstm_backward = LSTM(output_dim=self.hidden_size,
                         return_sequences=False,
                         go_backwards=True,
                         name='LSTM_backward')(inputs)
    lstm_backward = Dropout(self.dropout)(lstm_backward)
    lstm_concat = merge([lstm_forward, lstm_backward],
                        mode='concat', concat_axis=-1,
                        name='merge_bidirections')
    act_softmax = Dense(output_dim=self.agentAct_vocab_size,
                        activation='sigmoid')(lstm_concat)
    self.model = Model(input=inputs, output=act_softmax)
    self.model.compile(optimizer=self.optimizer,
                       loss='binary_crossentropy')

Author: XuesongYang | Project: end2end_dialog | Lines: 23 | Source: AgentActClassifyingModel.py


Example 9: identity_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def identity_block(input_tensor, kernel_size, filters, stage, block):
    nb_filter1, nb_filter2, nb_filter3 = filters
    bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Convolution2D(nb_filter1, 1, 1, name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter2, kernel_size, kernel_size,
                      border_mode='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter3, 1, 1, name=conv_name_base + '2c')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    x = merge([x, input_tensor], mode='sum')
    x = Activation('relu')(x)
    return x

Author: marcellacornia | Project: sam | Lines: 24 | Source: dcn_resnet.py


Example 10: conv_block_atrous

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def conv_block_atrous(input_tensor, kernel_size, filters, stage, block, atrous_rate=(2, 2)):
    nb_filter1, nb_filter2, nb_filter3 = filters
    bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Convolution2D(nb_filter1, 1, 1, name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = AtrousConvolution2D(nb_filter2, kernel_size, kernel_size, border_mode='same',
                            atrous_rate=atrous_rate, name=conv_name_base + '2b')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter3, 1, 1, name=conv_name_base + '2c')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    shortcut = Convolution2D(nb_filter3, 1, 1, name=conv_name_base + '1')(input_tensor)
    shortcut = BatchNormalization(axis=bn_axis, name=bn_name_base + '1')(shortcut)

    x = merge([x, shortcut], mode='sum')
    x = Activation('relu')(x)
    return x

Author: marcellacornia | Project: sam | Lines: 27 | Source: dcn_resnet.py


Example 11: downsample_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def downsample_block(x, nb_channels, kernel_size=3, bottleneck=True,
                     l2_reg=1e-4):
    if bottleneck:
        out = bottleneck_layer(x, nb_channels, kernel_size=kernel_size,
                               stride=2, l2_reg=l2_reg)
        # The output channels is 4x bigger on this case
        nb_channels = nb_channels * 4
    else:
        out = two_conv_layer(x, nb_channels, kernel_size=kernel_size,
                             stride=2, l2_reg=l2_reg)
    # Projection on the shortcut
    proj = Convolution2D(nb_channels, 1, 1, subsample=(2, 2),
                         border_mode='valid', init='he_normal',
                         W_regularizer=l2(l2_reg), bias=False)(x)
    # proj = AveragePooling2D((1, 1), (2, 2))(x)
    out = merge([proj, out], mode='sum')
    return out

Author: robertomest | Project: convnet-study | Lines: 19 | Source: resnet.py


Example 12: identity_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def identity_block(x, nb_filter, kernel_size=3):
    k1, k2, k3 = nb_filter
    shortcut = x

    out = Conv2D(k1, kernel_size=(1, 1), strides=(1, 1), padding="valid", activation="relu")(x)
    out = BatchNormalization(axis=3)(out)
    out = Conv2D(k2, kernel_size=(3, 3), strides=(1, 1), padding='same', activation="relu")(out)
    out = BatchNormalization(axis=3)(out)
    out = Conv2D(k3, kernel_size=(1, 1), strides=(1, 1), padding="valid")(out)
    out = BatchNormalization(axis=3)(out)

    # out = merge([out, shortcut], mode='sum')
    out = layers.add([out, shortcut])
    out = Activation('relu')(out)
    return out

Author: jarvisqi | Project: deep_learning | Lines: 19 | Source: resnet.py


Example 13: conv_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def conv_block(x, nb_filter, kernel_size=3):
    k1, k2, k3 = nb_filter
    shortcut = x

    out = Conv2D(k1, kernel_size=(1, 1), strides=(2, 2), padding="valid", activation="relu")(x)
    out = BatchNormalization(axis=3)(out)
    out = Conv2D(k2, kernel_size=(kernel_size, kernel_size), strides=(1, 1), padding="same", activation="relu")(out)
    out = BatchNormalization()(out)
    out = Conv2D(k3, kernel_size=(1, 1), strides=(1, 1), padding="valid")(out)
    out = BatchNormalization(axis=3)(out)

    shortcut = Conv2D(k3, kernel_size=(1, 1), strides=(2, 2), padding="valid")(shortcut)
    shortcut = BatchNormalization(axis=3)(shortcut)

    # out = merge([out, shortcut], mode='sum')
    out = layers.add([out, shortcut])
    out = Activation('relu')(out)
    return out

Author: jarvisqi | Project: deep_learning | Lines: 22 | Source: resnet.py


Example 14: _shortcut

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def _shortcut(input, residual):
    # Expand channels of shortcut to match residual.
    # Stride appropriately to match residual (width, height).
    # Should be int if network architecture is correctly configured.
    stride_width = input._keras_shape[2] / residual._keras_shape[2]
    stride_height = input._keras_shape[3] / residual._keras_shape[3]
    equal_channels = residual._keras_shape[1] == input._keras_shape[1]

    shortcut = input
    # 1 X 1 conv if shape is different. Else identity.
    if stride_width > 1 or stride_height > 1 or not equal_channels:
        shortcut = Convolution2D(nb_filter=residual._keras_shape[1], nb_row=1, nb_col=1,
                                 subsample=(stride_width, stride_height),
                                 init="he_normal", border_mode="valid")(input)

    return merge([shortcut, residual], mode="sum")

# Builds a residual block with repeating bottleneck blocks.

Author: yihui-he | Project: u-net | Lines: 21 | Source: train_res.py


Example 15: _up_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def _up_block(block, mrge, nb_filters):
    up = merge([Convolution2D(2*nb_filters, 2, 2, border_mode='same')(UpSampling2D(size=(2, 2))(block)), mrge],
               mode='concat', concat_axis=1)
    # conv = Convolution2D(4*nb_filters, 1, 1, activation='relu', border_mode='same')(up)
    conv = Convolution2D(nb_filters, 3, 3, activation='relu', border_mode='same')(up)
    conv = Convolution2D(nb_filters, 3, 3, activation='relu', border_mode='same')(conv)
    # conv = Convolution2D(4*nb_filters, 1, 1, activation='relu', border_mode='same')(conv)
    # conv = Convolution2D(nb_filters, 3, 3, activation='relu', border_mode='same')(conv)
    # conv = Convolution2D(nb_filters, 1, 1, activation='relu', border_mode='same')(conv)
    # conv = Convolution2D(4*nb_filters, 1, 1, activation='relu', border_mode='same')(conv)
    # conv = Convolution2D(nb_filters, 3, 3, activation='relu', border_mode='same')(conv)
    # conv = Convolution2D(nb_filters, 1, 1, activation='relu', border_mode='same')(conv)
    return conv

# http://arxiv.org/pdf/1512.03385v1.pdf
# 50 Layer resnet

Author: yihui-he | Project: u-net | Lines: 21 | Source: train_res.py


Example 16: build_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def build_model(n_classes):
    if K.image_dim_ordering() == 'th':
        input_shape = (1, N_MEL_BANDS, SEGMENT_DUR)
        channel_axis = 1
    else:
        input_shape = (N_MEL_BANDS, SEGMENT_DUR, 1)
        channel_axis = 3
    melgram_input = Input(shape=input_shape)

    m_sizes = [50, 70]
    n_sizes = [1, 3, 5]
    n_filters = [128, 64, 32]
    maxpool_const = 4

    layers = list()
    for m_i in m_sizes:
        for i, n_i in enumerate(n_sizes):
            x = Convolution2D(n_filters[i], m_i, n_i,
                              border_mode='same',
                              init='he_normal',
                              W_regularizer=l2(1e-5),
                              name=str(n_i)+'_'+str(m_i)+'_'+'conv')(melgram_input)
            x = BatchNormalization(axis=channel_axis, mode=0, name=str(n_i)+'_'+str(m_i)+'_'+'bn')(x)
            x = ELU()(x)
            x = MaxPooling2D(pool_size=(N_MEL_BANDS, SEGMENT_DUR/maxpool_const), name=str(n_i)+'_'+str(m_i)+'_'+'pool')(x)
            x = Flatten(name=str(n_i)+'_'+str(m_i)+'_'+'flatten')(x)
            layers.append(x)

    x = merge(layers, mode='concat', concat_axis=channel_axis)
    x = Dropout(0.5)(x)
    x = Dense(n_classes, init='he_normal', W_regularizer=l2(1e-5), activation='softmax', name='prediction')(x)
    model = Model(melgram_input, x)

    return model

Author: Veleslavia | Project: EUSIPCO2017 | Lines: 38 | Source: singlelayer.py


Example 17: build_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def build_model(self):
    input = Input(shape=self.state_size)
    shared = Conv2D(32, (8, 8), strides=(4, 4), activation='relu')(input)
    shared = Conv2D(64, (4, 4), strides=(2, 2), activation='relu')(shared)
    shared = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')(shared)
    flatten = Flatten()(shared)

    # network separate state value and advantages
    advantage_fc = Dense(512, activation='relu')(flatten)
    advantage = Dense(self.action_size)(advantage_fc)
    advantage = Lambda(lambda a: a[:, :] - K.mean(a[:, :], keepdims=True),
                       output_shape=(self.action_size,))(advantage)

    value_fc = Dense(512, activation='relu')(flatten)
    value = Dense(1)(value_fc)
    value = Lambda(lambda s: K.expand_dims(s[:, 0], -1),
                   output_shape=(self.action_size,))(value)

    # network merged and make Q Value
    q_value = merge([value, advantage], mode='sum')
    model = Model(inputs=input, outputs=q_value)
    model.summary()

    return model

# after some time interval update the target model to be same with model

Author: rlcode | Project: reinforcement-learning | Lines: 28 | Source: breakout_dueling_ddqn.py


Example 18: MatchScore

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def MatchScore(l, r, mode="euclidean"):
    if mode == "euclidean":
        return merge(
            [l, r],
            mode=compute_euclidean_match_score,
            output_shape=lambda shapes: (None, shapes[0][1], shapes[1][1])
        )

Author: GauravBh1010tt | Project: DeepLearn | Lines: 9 | Source: model_abcnn.py


Example 19: identity_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def identity_block(input_tensor, kernel_size, filters, stage, block):
    '''The identity_block is the block that has no conv layer at shortcut

    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of middle conv layer at main path
        filters: list of integers, the nb_filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names
    '''
    nb_filter1, nb_filter2, nb_filter3 = filters
    if K.image_dim_ordering() == 'tf':
        bn_axis = 3
    else:
        bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Convolution2D(nb_filter1, 1, 1, name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter2, kernel_size, kernel_size,
                      border_mode='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter3, 1, 1, name=conv_name_base + '2c')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    x = merge([x, input_tensor], mode='sum')
    x = Activation('relu')(x)
    return x

Author: ChunML | Project: DeepLearning | Lines: 35 | Source: resnet50.py


Example 20: conv_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
    '''conv_block is the block that has a conv layer at shortcut

    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of middle conv layer at main path
        filters: list of integers, the nb_filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names

    Note that from stage 3, the first conv layer at main path is with subsample=(2,2)
    And the shortcut should have subsample=(2,2) as well
    '''
    nb_filter1, nb_filter2, nb_filter3 = filters
    if K.image_dim_ordering() == 'tf':
        bn_axis = 3
    else:
        bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Convolution2D(nb_filter1, 1, 1, subsample=strides,
                      name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter2, kernel_size, kernel_size, border_mode='same',
                      name=conv_name_base + '2b')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Convolution2D(nb_filter3, 1, 1, name=conv_name_base + '2c')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    shortcut = Convolution2D(nb_filter3, 1, 1, subsample=strides,
                             name=conv_name_base + '1')(input_tensor)
    shortcut = BatchNormalization(axis=bn_axis, name=bn_name_base + '1')(shortcut)

    x = merge([x, shortcut], mode='sum')
    x = Activation('relu')(x)
    return x

Author: ChunML | Project: DeepLearning | Lines: 43 | Source: resnet50.py


Example 21: cunn_keras

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def cunn_keras(img_rows=FLAGS.img_rows, img_cols=FLAGS.img_cols, channels=FLAGS.nb_channels, nb_classes=FLAGS.nb_classes):
    '''
    Defines the VGG 16 model using the Keras Sequential model
    :param img_rows: number of row in the image
    :param img_cols: number of columns in the image
    :param channels: number of color channels (e.g., 1 for MNIST)
    :param nb_classes: the number of output classes
    :return: a Keras model. Call with model(<input_tensor>)
    '''
    input = Input(shape=(img_rows, img_cols, channels))
    conv1 = Convolution2D(32, 5, 5, border_mode='same', subsample=(1, 1), activation='relu')(input)
    pool1 = MaxPooling2D((2, 2), strides=(2, 2))(conv1)
    conv2 = Convolution2D(64, 5, 5, border_mode='same', subsample=(1, 1), activation='relu')(pool1)
    pool2 = MaxPooling2D((2, 2), strides=(2, 2))(conv2)
    conv3 = Convolution2D(128, 5, 5, border_mode='same', subsample=(1, 1), activation='relu')(pool2)
    pool3 = MaxPooling2D((2, 2), strides=(2, 2))(conv3)

    flat1 = Flatten()(pool1)
    flat2 = Flatten()(pool2)
    flat3 = Flatten()(pool3)

    # If this gives an error, update the Keras TensorFlow backend. It is likely making the call
    # tf.concat(axis, [to_dense(x) for x in tensors]) instead of tf.concat([to_dense(x) for x in tensors], axis)
    flat_all = merge([flat1, flat2, flat3], mode='concat', concat_axis=1)

    fc = Dense(1024)(flat_all)
    drop = Dropout(0.5)(fc)
    fc2 = Dense(nb_classes)(drop)
    output = Activation('softmax', name='prob')(fc2)

    model = Model(input=input, output=output)
    return model

Author: evtimovi | Project: robust_physical_perturbations | Lines: 37 | Source: model.py


Example 22: get_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def get_model(num_users, num_items, layers=[20, 10], reg_layers=[0, 0]):
    assert len(layers) == len(reg_layers)
    num_layer = len(layers)  # Number of layers in the MLP
    # Input variables
    user_input = Input(shape=(1,), dtype='int32', name='user_input')
    item_input = Input(shape=(1,), dtype='int32', name='item_input')

    MLP_Embedding_User = Embedding(input_dim=num_users, output_dim=layers[0]/2, name='user_embedding',
                                   init=init_normal, W_regularizer=l2(reg_layers[0]), input_length=1)
    MLP_Embedding_Item = Embedding(input_dim=num_items, output_dim=layers[0]/2, name='item_embedding',
                                   init=init_normal, W_regularizer=l2(reg_layers[0]), input_length=1)

    # Crucial to flatten an embedding vector!
    user_latent = Flatten()(MLP_Embedding_User(user_input))
    item_latent = Flatten()(MLP_Embedding_Item(item_input))

    # The 0-th layer is the concatenation of embedding layers
    vector = merge([user_latent, item_latent], mode='concat')

    # MLP layers
    for idx in xrange(1, num_layer):
        layer = Dense(layers[idx], W_regularizer=l2(reg_layers[idx]), activation='relu', name='layer%d' % idx)
        vector = layer(vector)

    # Final prediction layer
    prediction = Dense(1, activation='sigmoid', init='lecun_uniform', name='prediction')(vector)

    model = Model(input=[user_input, item_input],
                  output=prediction)

    return model

Author: hexiangnan | Project: neural_collaborative_filtering | Lines: 33 | Source: MLP.py


Example 23: Residual

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def Residual(feat_maps_in, feat_maps_out, prev_layer):
    '''
    A customizable residual unit with convolutional and shortcut blocks

    Args:
      feat_maps_in: number of channels/filters coming in, from input or previous layer
      feat_maps_out: how many output channels/filters this block will produce
      prev_layer: the previous layer
    '''

    skip = skip_block(feat_maps_in, feat_maps_out, prev_layer)
    conv = conv_block(feat_maps_out, prev_layer)

    print('Residual block mapping ' + str(feat_maps_in) + ' channels to ' + str(feat_maps_out) + ' channels built')
    return merge([skip, conv], mode='sum')  # the residual connection

Author: relh | Project: keras-residual-unit | Lines: 17 | Source: residual.py


Example 24: build_MLP_model

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def build_MLP_model(self):
    NUM_WINDOW_FEATURES = 2
    left_token_input = Input(name='left_token_input', shape=(NUM_WINDOW_FEATURES,))
    left_token_embedding = Embedding(output_dim=self.preprocessor.embedding_dims, input_dim=self.preprocessor.max_features,
                                     input_length=NUM_WINDOW_FEATURES)(left_token_input)
    left_token_embedding = Flatten(name="left_token_embedding")(left_token_embedding)

    n_PoS_tags = len(self.tag_names)
    left_PoS_input = Input(name='left_PoS_input', shape=(n_PoS_tags,))

    # target_token_input = Input(name='target_token_input', shape=(1,))

    right_token_input = Input(name='right_token_input', shape=(NUM_WINDOW_FEATURES,))
    right_token_embedding = Embedding(output_dim=self.preprocessor.embedding_dims, input_dim=self.preprocessor.max_features,
                                      input_length=NUM_WINDOW_FEATURES)(right_token_input)
    right_PoS_input = Input(name='right_PoS_input', shape=(n_PoS_tags,))
    right_token_embedding = Flatten(name="right_token_embedding")(right_token_embedding)

    other_features_input = Input(name='other_feature_inputs', shape=(4,))

    x = merge([left_token_embedding,  # target_token_input,
               right_token_embedding,
               left_PoS_input, right_PoS_input, other_features_input],
              mode='concat', concat_axis=1)

    x = Dense(128, name="hidden1", activation='relu')(x)
    x = Dropout(.2)(x)
    x = Dense(64, name="hidden2", activation='relu')(x)
    output = Dense(1, name="prediction", activation='sigmoid')(x)

    self.model = Model([left_token_input, left_PoS_input,  # target_token_input,
                        right_token_input, right_PoS_input, other_features_input],
                       output=[output])
    self.model.compile(optimizer="adam", loss="binary_crossentropy")

Author: ijmarshall | Project: robotreviewer | Lines: 37 | Source: sample_size_NN.py


Example 25: __init__

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def __init__(self):
    from keras.preprocessing import sequence
    from keras.models import load_model
    from keras.models import Sequential
    from keras.preprocessing import sequence
    from keras.layers import Dense, Dropout, Activation, Lambda, Input, merge, Flatten
    from keras.layers import Embedding
    from keras.layers import Convolution1D, MaxPooling1D
    from keras import backend as K
    from keras.models import Model
    from keras.regularizers import l2

    global sequence, load_model, Sequential, Dense, Dropout, Activation, Lambda, Input, merge, Flatten
    global Embedding, Convolution1D, MaxPooling1D, K, Model, l2

    self.svm_clf = MiniClassifier(os.path.join(robotreviewer.DATA_ROOT, 'rct/rct_svm_weights.npz'))
    cnn_weight_files = glob.glob(os.path.join(robotreviewer.DATA_ROOT, 'rct/*.h5'))
    self.cnn_clfs = [load_model(cnn_weight_file) for cnn_weight_file in cnn_weight_files]
    self.svm_vectorizer = HashingVectorizer(binary=False, ngram_range=(1, 1), stop_words='english')
    self.cnn_vectorizer = KerasVectorizer(vocab_map_file=os.path.join(robotreviewer.DATA_ROOT, 'rct/cnn_vocab_map.pck'), stop_words='english')
    with open(os.path.join(robotreviewer.DATA_ROOT, 'rct/rct_model_calibration.json'), 'r') as f:
        self.constants = json.load(f)

    self.calibration_lr = {}
    with open(os.path.join(robotreviewer.DATA_ROOT, 'rct/svm_cnn_ptyp_calibration.pck'), 'rb') as f:
        self.calibration_lr['svm_cnn_ptyp'] = pickle.load(f)
    with open(os.path.join(robotreviewer.DATA_ROOT, 'rct/svm_cnn_calibration.pck'), 'rb') as f:
        self.calibration_lr['svm_cnn'] = pickle.load(f)

Author: ijmarshall | Project: robotreviewer | Lines: 29 | Source: rct_robot.py


Example 26: make_parallel

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def make_parallel(model, gpu_count):
    def get_slice(data, idx, parts):
        shape = tf.shape(data)
        size = tf.concat(0, [shape[:1] // parts, shape[1:]])
        stride = tf.concat(0, [shape[:1] // parts, shape[1:] * 0])
        start = stride * idx
        return tf.slice(data, start, size)

    outputs_all = []
    for i in range(len(model.outputs)):
        outputs_all.append([])

    # Place a copy of the model on each GPU, each getting a slice of the batch
    for i in range(gpu_count):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('tower_%d' % i) as scope:

                inputs = []
                # Slice each input into a piece for processing on this GPU
                for x in model.inputs:
                    input_shape = tuple(x.get_shape().as_list())[1:]
                    slice_n = Lambda(get_slice, output_shape=input_shape, arguments={'idx': i, 'parts': gpu_count})(x)
                    inputs.append(slice_n)

                outputs = model(inputs)

                if not isinstance(outputs, list):
                    outputs = [outputs]

                # Save all the outputs for merging back together later
                for l in range(len(outputs)):
                    outputs_all[l].append(outputs[l])

    # merge outputs on CPU
    with tf.device('/cpu:0'):
        merged = []
        for outputs in outputs_all:
            merged.append(merge(outputs, mode='concat', concat_axis=0))

        return Model(input=model.inputs, output=merged)

Author: MateLabs | Project: All-Conv-Keras | Lines: 42 | Source: allconv.py
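The get_slice helper in Example 26 calls tf.concat with the pre-1.0 argument order (axis first). Under TensorFlow 1.x and later, the values come first and the axis is passed second, so a hedged, minimal rewrite of just that helper (the rest of make_parallel unchanged) might look like this:

import tensorflow as tf

def get_slice(data, idx, parts):
    # Split the batch dimension into `parts` equal slices and return slice `idx`.
    shape = tf.shape(data)
    size = tf.concat([shape[:1] // parts, shape[1:]], axis=0)        # TF >= 1.0 signature
    stride = tf.concat([shape[:1] // parts, shape[1:] * 0], axis=0)
    start = stride * idx
    return tf.slice(data, start, size)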


Example 27: dense_block

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def dense_block(x, nb_layers, nb_filter, growth_rate, dropout_rate=None, weight_decay=1E-4):
    ''' Build a dense_block where the output of each conv_block is fed to subsequent ones

    Args:
        x: keras tensor
        nb_layers: the number of layers of conv_block to append to the model.
        nb_filter: number of filters
        growth_rate: growth rate
        dropout_rate: dropout rate
        weight_decay: weight decay factor

    Returns: keras tensor with nb_layers of conv_block appended
    '''

    concat_axis = 1 if K.image_dim_ordering() == "th" else -1

    feature_list = [x]

    for i in range(nb_layers):
        x = conv_block(x, growth_rate, dropout_rate, weight_decay)
        feature_list.append(x)
        x = merge(feature_list, mode='concat', concat_axis=concat_axis)
        nb_filter += growth_rate

    return x, nb_filter

Author: cvjena | Project: semantic-embeddings | Lines: 28 | Source: densenet_fast.py


Example 28: dueling_dqn

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def dueling_dqn(input_shape, action_size, learning_rate):
    state_input = Input(shape=(input_shape))
    x = Convolution2D(32, 8, 8, subsample=(4, 4), activation='relu')(state_input)
    x = Convolution2D(64, 4, 4, subsample=(2, 2), activation='relu')(x)
    x = Convolution2D(64, 3, 3, activation='relu')(x)
    x = Flatten()(x)

    # state value tower - V
    state_value = Dense(256, activation='relu')(x)
    state_value = Dense(1, init='uniform')(state_value)
    state_value = Lambda(lambda s: K.expand_dims(s[:, 0], dim=-1), output_shape=(action_size,))(state_value)

    # action advantage tower - A
    action_advantage = Dense(256, activation='relu')(x)
    action_advantage = Dense(action_size)(action_advantage)
    action_advantage = Lambda(lambda a: a[:, :] - K.mean(a[:, :], keepdims=True), output_shape=(action_size,))(action_advantage)

    # merge to state-action value function Q
    state_action_value = merge([state_value, action_advantage], mode='sum')

    model = Model(input=state_input, output=state_action_value)
    # model.compile(rmsprop(lr=learning_rate), "mse")
    adam = Adam(lr=learning_rate)
    model.compile(loss='mse', optimizer=adam)

    return model

Author: flyyufelix | Project: VizDoom-Keras-RL | Lines: 29 | Source: networks.py


Example 29: concatenate_layers

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def concatenate_layers(inputs, concat_axis, mode='concat'):
    if KERAS_2:
        assert mode == 'concat', "Only concatenation is supported in this wrapper"
        return Concatenate(axis=concat_axis)(inputs)
    else:
        return merge(inputs=inputs, concat_axis=concat_axis, mode=mode)

Author: costapt | Project: vess2ret | Lines: 8 | Source: models.py


Example 30: residual_drop

# Required module: from keras import layers [as alias]
# Or: from keras.layers import merge [as alias]
def residual_drop(x, input_shape, output_shape, strides=(1, 1)):
    global add_tables

    nb_filter = output_shape[0]
    conv = Convolution2D(nb_filter, 3, 3, subsample=strides,
                         border_mode="same", W_regularizer=l2(weight_decay))(x)
    conv = BatchNormalization(axis=1)(conv)
    conv = Activation("relu")(conv)
    conv = Convolution2D(nb_filter, 3, 3,
                         border_mode="same", W_regularizer=l2(weight_decay))(conv)
    conv = BatchNormalization(axis=1)(conv)

    if strides[0] >= 2:
        x = AveragePooling2D(strides)(x)

    if (output_shape[0] - input_shape[0]) > 0:
        pad_shape = (1,
                     output_shape[0] - input_shape[0],
                     output_shape[1],
                     output_shape[2])
        padding = K.zeros(pad_shape)
        padding = K.repeat_elements(padding, K.shape(x)[0], axis=0)
        x = Lambda(lambda y: K.concatenate([y, padding], axis=1),
                   output_shape=output_shape)(x)

    _death_rate = K.variable(death_rate)
    scale = K.ones_like(conv) - _death_rate
    conv = Lambda(lambda c: K.in_test_phase(scale * c, c),
                  output_shape=output_shape)(conv)

    out = merge([conv, x], mode="sum")
    out = Activation("relu")(out)

    gate = K.variable(1, dtype="uint8")
    add_tables += [{"death_rate": _death_rate, "gate": gate}]
    return Lambda(lambda tensors: K.switch(gate, tensors[0], tensors[1]),
                  output_shape=output_shape)([out, x])

Author: dblN | Project: stochastic_depth_keras | Lines: 39 | Source: train.py

