
Tutorial: Code Examples for Python layers.Conv2D

51自学网  |  2020-12-01 11:08:44  |  Keras

This article collects typical usage examples of the keras.layers.Conv2D method in Python. If you have been wondering how layers.Conv2D works, how to call it, or what it looks like in real projects, the hand-picked code samples below should help. You can also explore further usage examples from its parent module, keras.layers.

A total of 30 code examples of the layers.Conv2D method are shown below, sorted by popularity by default.
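Before diving into the collected examples, here is a minimal sketch (not taken from any of the projects below) of the basic call pattern: Conv2D takes a number of filters, a kernel size, and optional padding/activation arguments, and is applied to a tensor in the functional API.

from keras.layers import Input, Conv2D
from keras.models import Model

# A single 3x3 convolution with 16 filters applied to 32x32 RGB images.
inputs = Input(shape=(32, 32, 3))
features = Conv2D(filters=16, kernel_size=(3, 3), padding='same', activation='relu')(inputs)
model = Model(inputs, features)
model.summary()  # output shape: (None, 32, 32, 16)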

Example 1: build_cae_model

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def build_cae_model(height=32, width=32, channel=3):
    """
    build convolutional autoencoder model
    """
    input_img = Input(shape=(height, width, channel))

    # encoder
    net = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
    net = MaxPooling2D((2, 2), padding='same')(net)
    net = Conv2D(8, (3, 3), activation='relu', padding='same')(net)
    net = MaxPooling2D((2, 2), padding='same')(net)
    net = Conv2D(4, (3, 3), activation='relu', padding='same')(net)
    encoded = MaxPooling2D((2, 2), padding='same', name='enc')(net)

    # decoder
    net = Conv2D(4, (3, 3), activation='relu', padding='same')(encoded)
    net = UpSampling2D((2, 2))(net)
    net = Conv2D(8, (3, 3), activation='relu', padding='same')(net)
    net = UpSampling2D((2, 2))(net)
    net = Conv2D(16, (3, 3), activation='relu', padding='same')(net)
    net = UpSampling2D((2, 2))(net)
    decoded = Conv2D(channel, (3, 3), activation='sigmoid', padding='same')(net)

    return Model(input_img, decoded)

Author: hiram64 | Project: ocsvm-anomaly-detection | Lines: 26 | Source file: model.py
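As a rough usage sketch (not part of the original project): the returned autoencoder can be compiled with a reconstruction loss and trained to reproduce its own input. Here x_train is an assumed array of shape (N, 32, 32, 3) with values scaled to [0, 1].

cae = build_cae_model(height=32, width=32, channel=3)
cae.compile(optimizer='adam', loss='mse')
# Train the network to reconstruct its input (x_train is a placeholder name).
cae.fit(x_train, x_train, epochs=10, batch_size=64, shuffle=True)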


Example 2: g_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def g_block(inp, fil, u = True):
    if u:
        out = UpSampling2D(interpolation = 'bilinear')(inp)
    else:
        out = Activation('linear')(inp)

    skip = Conv2D(fil, 1, padding = 'same', kernel_initializer = 'he_normal')(out)

    out = Conv2D(filters = fil, kernel_size = 3, padding = 'same', kernel_initializer = 'he_normal')(out)
    out = LeakyReLU(0.2)(out)
    out = Conv2D(filters = fil, kernel_size = 3, padding = 'same', kernel_initializer = 'he_normal')(out)
    out = LeakyReLU(0.2)(out)
    out = Conv2D(fil, 1, padding = 'same', kernel_initializer = 'he_normal')(out)

    out = add([out, skip])
    out = LeakyReLU(0.2)(out)

    return out

Author: manicman1999 | Project: Keras-BiGAN | Lines: 23 | Source file: bigan.py


Example 3: d_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def d_block(inp, fil, p = True):
    skip = Conv2D(fil, 1, padding = 'same', kernel_initializer = 'he_normal')(inp)

    out = Conv2D(filters = fil, kernel_size = 3, padding = 'same', kernel_initializer = 'he_normal')(inp)
    out = LeakyReLU(0.2)(out)
    out = Conv2D(filters = fil, kernel_size = 3, padding = 'same', kernel_initializer = 'he_normal')(out)
    out = LeakyReLU(0.2)(out)
    out = Conv2D(fil, 1, padding = 'same', kernel_initializer = 'he_normal')(out)

    out = add([out, skip])
    out = LeakyReLU(0.2)(out)

    if p:
        out = AveragePooling2D()(out)

    return out

Author: manicman1999 | Project: Keras-BiGAN | Lines: 21 | Source file: bigan.py


Example 4: build_model

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def build_model(x_train, num_classes):
    # Reset default graph. Keras leaves old ops in the graph,
    # which are ignored for execution but clutter graph
    # visualization in TensorBoard.
    tf.reset_default_graph()

    inputs = KL.Input(shape=x_train.shape[1:], name="input_image")
    x = KL.Conv2D(32, (3, 3), activation='relu', padding="same",
                  name="conv1")(inputs)
    x = KL.Conv2D(64, (3, 3), activation='relu', padding="same",
                  name="conv2")(x)
    x = KL.MaxPooling2D(pool_size=(2, 2), name="pool1")(x)
    x = KL.Flatten(name="flat1")(x)
    x = KL.Dense(128, activation='relu', name="dense1")(x)
    x = KL.Dense(num_classes, activation='softmax', name="dense2")(x)
    return KM.Model(inputs, x, "digit_classifier_model")

# Load MNIST Data

Author: dataiku | Project: dataiku-contrib | Lines: 21 | Source file: parallel_model.py


Example 5: conv2d

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv2d(h_num_filters, h_filter_width, h_stride, h_use_bias):

    def compile_fn(di, dh):
        layer = layers.Conv2D(dh['num_filters'], (dh['filter_width'],) * 2,
                              strides=(dh['stride'],) * 2,
                              use_bias=dh['use_bias'],
                              padding='SAME')

        def fn(di):
            return {'out': layer(di['in'])}

        return fn

    return siso_keras_module(
        'Conv2D', compile_fn, {
            'num_filters': h_num_filters,
            'filter_width': h_filter_width,
            'stride': h_stride,
            'use_bias': h_use_bias,
        })

Author: negrinho | Project: deep_architect | Lines: 22 | Source file: keras_ops.py


Example 6: InceptionLayer

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def InceptionLayer(self, a, b, c, d):
    def func(x):
        x1 = Conv2D(a, (1, 1), padding='same', activation='relu')(x)

        x2 = Conv2D(b, (1, 1), padding='same', activation='relu')(x)
        x2 = Conv2D(b, (3, 3), padding='same', activation='relu')(x2)

        x3 = Conv2D(c, (1, 1), padding='same', activation='relu')(x)
        x3 = Conv2D(c, (3, 3), dilation_rate = 2, strides = 1, padding='same', activation='relu')(x3)

        x4 = Conv2D(d, (1, 1), padding='same', activation='relu')(x)
        x4 = Conv2D(d, (3, 3), dilation_rate = 3, strides = 1, padding='same', activation='relu')(x4)

        y = Concatenate(axis = -1)([x1, x2, x3, x4])

        return y
    return func

Author: DariusAf | Project: MesoNet | Lines: 19 | Source file: classifiers.py


Example 7: _conv2d_same

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def _conv2d_same(x, filters, prefix, stride=1, kernel_size=3, rate=1):
    # Work out how much padding is needed and whether height/width should shrink
    if stride == 1:
        return Conv2D(filters,
                      (kernel_size, kernel_size),
                      strides=(stride, stride),
                      padding='same', use_bias=False,
                      dilation_rate=(rate, rate),
                      name=prefix)(x)
    else:
        kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
        pad_total = kernel_size_effective - 1
        pad_beg = pad_total // 2
        pad_end = pad_total - pad_beg
        x = ZeroPadding2D((pad_beg, pad_end))(x)
        return Conv2D(filters,
                      (kernel_size, kernel_size),
                      strides=(stride, stride),
                      padding='valid', use_bias=False,
                      dilation_rate=(rate, rate),
                      name=prefix)(x)

Author: bubbliiiing | Project: Semantic-Segmentation | Lines: 23 | Source file: Xception.py
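To make the padding arithmetic in the stride > 1 branch concrete, here is the calculation worked out for kernel_size=3 and rate=2 (values chosen only for illustration):

kernel_size, rate = 3, 2
kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)  # 3 + 2*1 = 5
pad_total = kernel_size_effective - 1                                 # 4
pad_beg = pad_total // 2                                              # 2
pad_end = pad_total - pad_beg                                         # 2
# ZeroPadding2D((2, 2)) followed by a 'valid' convolution then behaves like
# 'same' padding while keeping the downsampled output correctly aligned.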


Example 8: DCGAN_discriminator

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def DCGAN_discriminator():
    nb_filters = 64
    nb_conv = int(np.floor(np.log(128) / np.log(2)))
    list_filters = [nb_filters * min(8, (2 ** i)) for i in range(nb_conv)]

    input_img = Input(shape=(128, 128, 3))
    x = Conv2D(list_filters[0], (3, 3), strides=(2, 2), name="disc_conv2d_1", padding="same")(input_img)
    x = BatchNormalization(axis=-1)(x)
    x = LeakyReLU(0.2)(x)

    # Next convs
    for i, f in enumerate(list_filters[1:]):
        name = "disc_conv2d_%s" % (i + 2)
        x = Conv2D(f, (3, 3), strides=(2, 2), name=name, padding="same")(x)
        x = BatchNormalization(axis=-1)(x)
        x = LeakyReLU(0.2)(x)

    x_flat = Flatten()(x)
    x_out = Dense(1, activation="sigmoid", name="disc_dense")(x_flat)

    discriminator_model = Model(inputs=input_img, outputs=[x_out])
    return discriminator_model

Author: kirumang | Project: Pix2Pose | Lines: 22 | Source file: ae_model.py


Example 9: _initial_conv_block_inception

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def _initial_conv_block_inception(input, initial_conv_filters, weight_decay=5e-4):
    ''' Adds an initial conv block, with batch norm and relu for the DPN
    Args:
        input: input tensor
        initial_conv_filters: number of filters for initial conv block
        weight_decay: weight decay factor
    Returns: a keras tensor
    '''
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x = Conv2D(initial_conv_filters, (7, 7), padding='same', use_bias=False, kernel_initializer='he_normal',
               kernel_regularizer=l2(weight_decay), strides=(2, 2))(input)
    x = BatchNormalization(axis=channel_axis)(x)
    x = Activation('relu')(x)

    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    return x

Author: titu1994 | Project: Keras-DualPathNetworks | Lines: 20 | Source file: dual_path_network.py


Example 10: _bn_relu_conv_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def _bn_relu_conv_block(input, filters, kernel=(3, 3), stride=(1, 1), weight_decay=5e-4):
    ''' Adds a Batchnorm-Relu-Conv block for DPN
    Args:
        input: input tensor
        filters: number of output filters
        kernel: convolution kernel size
        stride: stride of convolution
    Returns: a keras tensor
    '''
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1

    x = Conv2D(filters, kernel, padding='same', use_bias=False, kernel_initializer='he_normal',
               kernel_regularizer=l2(weight_decay), strides=stride)(input)
    x = BatchNormalization(axis=channel_axis)(x)
    x = Activation('relu')(x)

    return x

Author: titu1994 | Project: Keras-DualPathNetworks | Lines: 18 | Source file: dual_path_network.py


Example 11: _conv_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def _conv_block(inp, convs, skip=True):
  x = inp
  count = 0
  len_convs = len(convs)
  for conv in convs:
    if count == (len_convs - 2) and skip:
      skip_connection = x
    count += 1
    if conv['stride'] > 1: x = ZeroPadding2D(((1,0),(1,0)))(x) # peculiar padding as darknet prefers left and top
    x = Conv2D(conv['filter'],
               conv['kernel'],
               strides=conv['stride'],
               padding='valid' if conv['stride'] > 1 else 'same', # peculiar padding as darknet prefers left and top
               name='conv_' + str(conv['layer_idx']),
               use_bias=False if conv['bnorm'] else True)(x)
    if conv['bnorm']: x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)
    if conv['leaky']: x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)
  return add([skip_connection, x]) if skip else x

# The SPP block uses three pooling layers of sizes [5, 9, 13] with stride one, and all
# outputs together with the input are concatenated to be fed to the FC block.

Author: produvia | Project: ai-platform | Lines: 24 | Source file: yolov3_weights_to_keras.py


Example 12: conv_2d

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv_2d(filters, kernel_shape, strides, padding, input_shape=None):
    """
    Defines the right convolutional layer according to the
    version of Keras that is installed.
    :param filters: (required integer) the dimensionality of the output
                    space (i.e. the number of output filters in the
                    convolution)
    :param kernel_shape: (required tuple or list of 2 integers) specifies
                         the kernel size of the convolution along the width
                         and height.
    :param padding: (required string) can be either 'valid' (no padding around
                    input or feature map) or 'same' (pad to ensure that the
                    output feature map size is identical to the layer input)
    :param input_shape: (optional) give input shape if this is the first
                        layer of the model
    :return: the Keras layer
    """
    if LooseVersion(keras.__version__) >= LooseVersion('2.0.0'):
        if input_shape is not None:
            return Conv2D(filters=filters, kernel_size=kernel_shape,
                          strides=strides, padding=padding,
                          input_shape=input_shape)
        else:
            return Conv2D(filters=filters, kernel_size=kernel_shape,
                          strides=strides, padding=padding)
    else:
        if input_shape is not None:
            return Convolution2D(filters, kernel_shape[0], kernel_shape[1],
                                 subsample=strides, border_mode=padding,
                                 input_shape=input_shape)
        else:
            return Convolution2D(filters, kernel_shape[0], kernel_shape[1],
                                 subsample=strides, border_mode=padding)

Author: StephanZheng | Project: neural-fingerprinting | Lines: 35 | Source file: utils_keras.py
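A possible way to use this wrapper, assuming Keras >= 2.0 is installed and a 28x28 grayscale input (both are assumptions, not part of the original snippet):

from keras.models import Sequential

model = Sequential()
# The first layer needs input_shape; the wrapper forwards it to Conv2D.
model.add(conv_2d(filters=64, kernel_shape=(8, 8), strides=(2, 2),
                  padding='same', input_shape=(28, 28, 1)))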


Example 13: ss_bt

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def ss_bt(self, x, dilation, strides=(1, 1), padding='same'):
    x1, x2 = self.channel_split(x)
    filters = (int(x.shape[-1]) // self.groups)
    x1 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding)(x1)
    x1 = layers.Activation('relu')(x1)
    x1 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding)(x1)
    x1 = layers.BatchNormalization()(x1)
    x1 = layers.Activation('relu')(x1)
    x1 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding, dilation_rate=(dilation, 1))(x1)
    x1 = layers.Activation('relu')(x1)
    x1 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding, dilation_rate=(1, dilation))(x1)
    x1 = layers.BatchNormalization()(x1)
    x1 = layers.Activation('relu')(x1)

    x2 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding)(x2)
    x2 = layers.Activation('relu')(x2)
    x2 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding)(x2)
    x2 = layers.BatchNormalization()(x2)
    x2 = layers.Activation('relu')(x2)
    x2 = layers.Conv2D(filters, kernel_size=(1, 3), strides=strides, padding=padding, dilation_rate=(1, dilation))(x2)
    x2 = layers.Activation('relu')(x2)
    x2 = layers.Conv2D(filters, kernel_size=(3, 1), strides=strides, padding=padding, dilation_rate=(dilation, 1))(x2)
    x2 = layers.BatchNormalization()(x2)
    x2 = layers.Activation('relu')(x2)

    x_concat = layers.concatenate([x1, x2], axis=-1)
    x_add = layers.add([x, x_concat])
    output = self.channel_shuffle(x_add)
    return output

Author: JACKYLUO1991 | Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation | Lines: 34 | Source file: lednet.py


Example 14: down_sample

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def down_sample(self, x, filters):
    x_filters = int(x.shape[-1])
    x_conv = layers.Conv2D(filters - x_filters, kernel_size=3, strides=(2, 2), padding='same')(x)
    x_pool = layers.MaxPool2D()(x)
    x = layers.concatenate([x_conv, x_pool], axis=-1)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return x

Author: JACKYLUO1991 | Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation | Lines: 10 | Source file: lednet.py


Example 15: apn_module

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def apn_module(self, x):

    def right(x):
        x = layers.AveragePooling2D()(x)
        x = layers.Conv2D(self.classes, kernel_size=1, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.UpSampling2D(interpolation='bilinear')(x)
        return x

    def conv(x, filters, kernel_size, stride):
        x = layers.Conv2D(filters, kernel_size=kernel_size, strides=(stride, stride), padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        return x

    x_7 = conv(x, int(x.shape[-1]), 7, stride=2)
    x_5 = conv(x_7, int(x.shape[-1]), 5, stride=2)
    x_3 = conv(x_5, int(x.shape[-1]), 3, stride=2)

    x_3_1 = conv(x_3, self.classes, 3, stride=1)
    x_3_1_up = layers.UpSampling2D(interpolation='bilinear')(x_3_1)
    x_5_1 = conv(x_5, self.classes, 5, stride=1)
    x_3_5 = layers.add([x_5_1, x_3_1_up])
    x_3_5_up = layers.UpSampling2D(interpolation='bilinear')(x_3_5)
    x_7_1 = conv(x_7, self.classes, 3, stride=1)
    x_3_5_7 = layers.add([x_7_1, x_3_5_up])
    x_3_5_7_up = layers.UpSampling2D(interpolation='bilinear')(x_3_5_7)

    x_middle = conv(x, self.classes, 1, stride=1)
    x_middle = layers.multiply([x_3_5_7_up, x_middle])

    x_right = right(x)
    x_middle = layers.add([x_middle, x_right])
    return x_middle

Author: JACKYLUO1991 | Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation | Lines: 37 | Source file: lednet.py


Example 16: decoder

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def decoder(self, x):
    x = self.apn_module(x)
    x = layers.UpSampling2D(size=8, interpolation='bilinear')(x)
    x = layers.Conv2D(self.classes, kernel_size=3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('softmax')(x)
    return x

Author: JACKYLUO1991 | Project: Face-skin-hair-segmentaiton-and-skin-color-evaluation | Lines: 9 | Source file: lednet.py


Example 17: conv2d_bn

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv2d_bn(x,
              filters,
              kernel_size,
              strides=1,
              padding='same',
              activation='relu',
              use_bias=False,
              name=None):
    """Utility function to apply conv + BN.

    # Arguments
        x: input tensor.
        filters: filters in `Conv2D`.
        kernel_size: kernel size as in `Conv2D`.
        padding: padding mode in `Conv2D`.
        activation: activation in `Conv2D`.
        strides: strides in `Conv2D`.
        name: name of the ops; will become `name + '_ac'` for the activation
            and `name + '_bn'` for the batch norm layer.

    # Returns
        Output tensor after applying `Conv2D` and `BatchNormalization`.
    """
    x = Conv2D(filters,
               kernel_size,
               strides=strides,
               padding=padding,
               use_bias=use_bias,
               name=name)(x)
    if not use_bias:
        bn_axis = 1 if K.image_data_format() == 'channels_first' else 3
        bn_name = None if name is None else name + '_bn'
        x = BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
    if activation is not None:
        ac_name = None if name is None else name + '_ac'
        x = Activation(activation, name=ac_name)(x)
    return x

Author: killthekitten | Project: kaggle-carvana-2017 | Lines: 39 | Source file: inception_resnet_v2.py
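A brief sketch of how such a helper is typically chained (the input shape and layer names below are assumptions chosen for illustration):

from keras.layers import Input
from keras.models import Model

inputs = Input(shape=(64, 64, 3))
x = conv2d_bn(inputs, filters=32, kernel_size=3, strides=2, name='stem_conv1')
x = conv2d_bn(x, filters=64, kernel_size=3, name='stem_conv2')
model = Model(inputs, x)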


Example 18: identity_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def identity_block(input_tensor, kernel_size, filters, stage, block):
    """The identity block is the block that has no conv layer at shortcut.

    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of middle conv layer at main path
        filters: list of integers, the filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names

    # Returns
        Output tensor for the block.
    """
    filters1, filters2, filters3 = filters
    if K.image_data_format() == 'channels_last':
        bn_axis = 3
    else:
        bn_axis = 1
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Conv2D(filters1, (1, 1), name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters2, kernel_size,
               padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)

    x = layers.add([x, input_tensor])
    x = Activation('relu')(x)
    return x

Author: killthekitten | Project: kaggle-carvana-2017 | Lines: 38 | Source file: resnet50_fixed.py


Example 19: build

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def build(self, input_shape):
    assert len(input_shape) == 4

    self.conv1 = Conv2D(filters=self.filters * self.dim_capsule,
                        kernel_size=self.kernel_size,
                        strides=self.strides,
                        padding=self.padding,
                        name='primarycap_conv2d')

Author: l11x0m7 | Project: CapsNet | Lines: 9 | Source file: capsule.py


Example 20: CapsuleNet

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def CapsuleNet(input_shape, n_class, num_routing):
    """
    The whole capsule network for MNIST recognition.
    """
    # (None, H, W, C)
    x = Input(input_shape)
    conv1 = Conv2D(filters=256, kernel_size=9, padding='valid', activation='relu', name='init_conv')(x)
    # (None, num_capsules, capsule_dim)
    prim_caps = PrimaryCapsules(filters=32, kernel_size=9, dim_capsule=8, padding='valid', strides=(2, 2))(conv1)
    # (None, n_class, dim_vector)
    digit_caps = DigiCaps(num_capsule=n_class, dim_capsule=16,
                          num_routing=num_routing, name='digitcaps')(prim_caps)
    # (None, n_class)
    pred = Length(name='out_caps')(digit_caps)

    # (None, n_class)
    y = Input(shape=(n_class, ))
    # (None, n_class * dim_vector)
    masked = Mask()([digit_caps, y])

    x_recon = layers.Dense(512, activation='relu')(masked)
    x_recon = layers.Dense(1024, activation='relu')(x_recon)
    x_recon = layers.Dense(784, activation='sigmoid')(x_recon)
    x_recon = layers.Reshape(target_shape=[28, 28, 1], name='out_recon')(x_recon)

    # two-input-two-output keras Model
    return Model([x, y], [pred, x_recon])

Author: l11x0m7 | Project: CapsNet | Lines: 33 | Source file: capsule.py


Example 21: load_model

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def load_model():
    from keras.models import Model
    from keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D

    tensor_in = Input((60, 200, 3))
    out = tensor_in
    out = Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu')(out)
    out = Conv2D(filters=32, kernel_size=(3, 3), activation='relu')(out)
    out = MaxPooling2D(pool_size=(2, 2))(out)
    out = Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu')(out)
    out = Conv2D(filters=64, kernel_size=(3, 3), activation='relu')(out)
    out = MaxPooling2D(pool_size=(2, 2))(out)
    out = Conv2D(filters=128, kernel_size=(3, 3), padding='same', activation='relu')(out)
    out = Conv2D(filters=128, kernel_size=(3, 3), activation='relu')(out)
    out = MaxPooling2D(pool_size=(2, 2))(out)
    out = Conv2D(filters=256, kernel_size=(3, 3), activation='relu')(out)
    out = MaxPooling2D(pool_size=(2, 2))(out)
    out = Flatten()(out)
    out = Dropout(0.5)(out)
    out = [Dense(37, name='digit1', activation='softmax')(out),
           Dense(37, name='digit2', activation='softmax')(out),
           Dense(37, name='digit3', activation='softmax')(out),
           Dense(37, name='digit4', activation='softmax')(out),
           Dense(37, name='digit5', activation='softmax')(out),
           Dense(37, name='digit6', activation='softmax')(out)]

    model = Model(inputs=tensor_in, outputs=out)

    # Define the optimizer
    model.compile(loss='categorical_crossentropy', optimizer='Adamax', metrics=['accuracy'])

    if 'Windows' in platform.platform():
        model.load_weights('{}//cnn_weight//verificatioin_code.h5'.format(PATH))
    else:
        model.load_weights('{}/cnn_weight/verificatioin_code.h5'.format(PATH))

    return model

Author: linsamtw | Project: TaiwanTrainVerificationCode2text | Lines: 39 | Source file: load_model.py
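A hypothetical inference sketch for the captcha model above: img stands in for a real (60, 200, 3) image scaled to [0, 1], and each of the six output heads predicts one character out of 37 classes.

import numpy as np

model = load_model()
img = np.random.rand(1, 60, 200, 3)   # placeholder for a preprocessed captcha image
predictions = model.predict(img)      # list of six (1, 37) probability arrays
char_indices = [int(p.argmax()) for p in predictions]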


Example 22: generator

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def generator(self):

    if self.G:
        return self.G

    # Inputs
    inp = Input(shape = [latent_size])

    # Latent

    # Actual Model
    x = Dense(4*4*16*cha, kernel_initializer = 'he_normal')(inp)
    x = Reshape([4, 4, 16*cha])(x)

    x = g_block(x, 16 * cha, u = False)  # 4
    x = g_block(x, 8 * cha)   # 8
    x = g_block(x, 4 * cha)   # 16
    x = g_block(x, 3 * cha)   # 32
    x = g_block(x, 2 * cha)   # 64
    x = g_block(x, 1 * cha)   # 128

    x = Conv2D(filters = 3, kernel_size = 1, activation = 'sigmoid', padding = 'same', kernel_initializer = 'he_normal')(x)

    self.G = Model(inputs = inp, outputs = x)

    return self.G

Author: manicman1999 | Project: Keras-BiGAN | Lines: 28 | Source file: bigan.py


Example 23: identity_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def identity_block(input_tensor, kernel_size, filters, stage, block,
                   use_bias=True, train_bn=True):
    """The identity_block is the block that has no conv layer at shortcut
    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of middle conv layer at main path
        filters: list of integers, the nb_filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names
        use_bias: Boolean. To use or not use a bias in conv layers.
        train_bn: Boolean. Train or freeze Batch Norm layers
    """
    nb_filter1, nb_filter2, nb_filter3 = filters
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = KL.Conv2D(nb_filter1, (1, 1), name=conv_name_base + '2a',
                  use_bias=use_bias)(input_tensor)
    x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn)
    x = KL.Activation('relu')(x)

    x = KL.Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same',
                  name=conv_name_base + '2b', use_bias=use_bias)(x)
    x = BatchNorm(name=bn_name_base + '2b')(x, training=train_bn)
    x = KL.Activation('relu')(x)

    x = KL.Conv2D(nb_filter3, (1, 1), name=conv_name_base + '2c',
                  use_bias=use_bias)(x)
    x = BatchNorm(name=bn_name_base + '2c')(x, training=train_bn)

    x = KL.Add()([x, input_tensor])
    x = KL.Activation('relu', name='res' + str(stage) + block + '_out')(x)
    return x

Author: dataiku | Project: dataiku-contrib | Lines: 35 | Source file: model.py


Example 24: conv_block

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv_block(input_tensor, kernel_size, filters, stage, block,
               strides=(2, 2), use_bias=True, train_bn=True):
    """conv_block is the block that has a conv layer at shortcut
    # Arguments
        input_tensor: input tensor
        kernel_size: default 3, the kernel size of middle conv layer at main path
        filters: list of integers, the nb_filters of 3 conv layer at main path
        stage: integer, current stage label, used for generating layer names
        block: 'a','b'..., current block label, used for generating layer names
        use_bias: Boolean. To use or not use a bias in conv layers.
        train_bn: Boolean. Train or freeze Batch Norm layers
    Note that from stage 3, the first conv layer at main path is with subsample=(2,2)
    And the shortcut should have subsample=(2,2) as well
    """
    nb_filter1, nb_filter2, nb_filter3 = filters
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = KL.Conv2D(nb_filter1, (1, 1), strides=strides,
                  name=conv_name_base + '2a', use_bias=use_bias)(input_tensor)
    x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn)
    x = KL.Activation('relu')(x)

    x = KL.Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same',
                  name=conv_name_base + '2b', use_bias=use_bias)(x)
    x = BatchNorm(name=bn_name_base + '2b')(x, training=train_bn)
    x = KL.Activation('relu')(x)

    x = KL.Conv2D(nb_filter3, (1, 1), name=conv_name_base + '2c',
                  use_bias=use_bias)(x)
    x = BatchNorm(name=bn_name_base + '2c')(x, training=train_bn)

    shortcut = KL.Conv2D(nb_filter3, (1, 1), strides=strides,
                         name=conv_name_base + '1', use_bias=use_bias)(input_tensor)
    shortcut = BatchNorm(name=bn_name_base + '1')(shortcut, training=train_bn)

    x = KL.Add()([x, shortcut])
    x = KL.Activation('relu', name='res' + str(stage) + block + '_out')(x)
    return x

Author: dataiku | Project: dataiku-contrib | Lines: 41 | Source file: model.py


Example 25: resnet_graph

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def resnet_graph(input_image, architecture, stage5=False, train_bn=True):
    """Build a ResNet graph.
        architecture: Can be resnet50 or resnet101
        stage5: Boolean. If False, stage5 of the network is not created
        train_bn: Boolean. Train or freeze Batch Norm layers
    """
    assert architecture in ["resnet50", "resnet101"]
    # Stage 1
    x = KL.ZeroPadding2D((3, 3))(input_image)
    x = KL.Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=True)(x)
    x = BatchNorm(name='bn_conv1')(x, training=train_bn)
    x = KL.Activation('relu')(x)
    C1 = x = KL.MaxPooling2D((3, 3), strides=(2, 2), padding="same")(x)
    # Stage 2
    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1), train_bn=train_bn)
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b', train_bn=train_bn)
    C2 = x = identity_block(x, 3, [64, 64, 256], stage=2, block='c', train_bn=train_bn)
    # Stage 3
    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a', train_bn=train_bn)
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b', train_bn=train_bn)
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c', train_bn=train_bn)
    C3 = x = identity_block(x, 3, [128, 128, 512], stage=3, block='d', train_bn=train_bn)
    # Stage 4
    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a', train_bn=train_bn)
    block_count = {"resnet50": 5, "resnet101": 22}[architecture]
    for i in range(block_count):
        x = identity_block(x, 3, [256, 256, 1024], stage=4, block=chr(98 + i), train_bn=train_bn)
    C4 = x
    # Stage 5
    if stage5:
        x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a', train_bn=train_bn)
        x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b', train_bn=train_bn)
        C5 = x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c', train_bn=train_bn)
    else:
        C5 = None
    return [C1, C2, C3, C4, C5]

############################################################
#  Proposal Layer
############################################################

Author: dataiku | Project: dataiku-contrib | Lines: 43 | Source file: model.py


Example 26: rpn_graph

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def rpn_graph(feature_map, anchors_per_location, anchor_stride):
    """Builds the computation graph of Region Proposal Network.

    feature_map: backbone features [batch, height, width, depth]
    anchors_per_location: number of anchors per pixel in the feature map
    anchor_stride: Controls the density of anchors. Typically 1 (anchors for
                   every pixel in the feature map), or 2 (every other pixel).

    Returns:
        rpn_class_logits: [batch, H * W * anchors_per_location, 2] Anchor classifier logits (before softmax)
        rpn_probs: [batch, H * W * anchors_per_location, 2] Anchor classifier probabilities.
        rpn_bbox: [batch, H * W * anchors_per_location, (dy, dx, log(dh), log(dw))] Deltas to be
                  applied to anchors.
    """
    # TODO: check if stride of 2 causes alignment issues if the feature map
    # is not even.
    # Shared convolutional base of the RPN
    shared = KL.Conv2D(512, (3, 3), padding='same', activation='relu',
                       strides=anchor_stride,
                       name='rpn_conv_shared')(feature_map)

    # Anchor Score. [batch, height, width, anchors per location * 2].
    x = KL.Conv2D(2 * anchors_per_location, (1, 1), padding='valid',
                  activation='linear', name='rpn_class_raw')(shared)

    # Reshape to [batch, anchors, 2]
    rpn_class_logits = KL.Lambda(
        lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 2]))(x)

    # Softmax on last dimension of BG/FG.
    rpn_probs = KL.Activation(
        "softmax", name="rpn_class_xxx")(rpn_class_logits)

    # Bounding box refinement. [batch, H, W, anchors per location * depth]
    # where depth is [x, y, log(w), log(h)]
    x = KL.Conv2D(anchors_per_location * 4, (1, 1), padding="valid",
                  activation='linear', name='rpn_bbox_pred')(shared)

    # Reshape to [batch, anchors, 4]
    rpn_bbox = KL.Lambda(lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 4]))(x)

    return [rpn_class_logits, rpn_probs, rpn_bbox]

Author: dataiku | Project: dataiku-contrib | Lines: 42 | Source file: model.py


Example 27: conv2d_with_relu

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv2d_with_relu(filters, kernel_size):
    return hke.siso_keras_module_from_keras_layer_fn(
        lambda: Conv2D(filters, kernel_size, padding='same',
                       activation='relu', use_bias=False),
        {}, name="Conv2D")

Author: negrinho | Project: deep_architect | Lines: 7 | Source file: main_genetic.py


Example 28: conv2d

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv2d(h_filters, h_kernel_size, stride):
    return hke.siso_keras_module_from_keras_layer_fn(
        lambda filters, kernel_size: Conv2D(
            filters, kernel_size, padding='same', strides=stride), {
                "filters": h_filters,
                "kernel_size": h_kernel_size
            })

Author: negrinho | Project: deep_architect | Lines: 9 | Source file: main_deep_architect.py


Example 29: conv2d_with_relu

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def conv2d_with_relu(filters, kernel_size):
    return hke.siso_keras_module_from_keras_layer_fn(
        lambda: Conv2D(filters, kernel_size, padding='same', activation='relu', use_bias=False),
        {}, name="Conv2D")

Author: negrinho | Project: deep_architect | Lines: 6 | Source file: main_hierarchical.py


Example 30: init_model

# Required imports: from keras import layers  (or: from keras.layers import Conv2D)
def init_model(self, dl_rate):
    x = Input(shape = (IMGWIDTH, IMGWIDTH, 3))

    x1 = Conv2D(16, (3, 3), dilation_rate = dl_rate, strides = 1, padding='same', activation = 'relu')(x)
    x1 = Conv2D(4, (1, 1), padding='same', activation = 'relu')(x1)
    x1 = BatchNormalization()(x1)
    x1 = MaxPooling2D(pool_size=(8, 8), padding='same')(x1)

    y = Flatten()(x1)
    y = Dropout(0.5)(y)
    y = Dense(1, activation = 'sigmoid')(y)
    return KerasModel(inputs = x, outputs = y)

Author: DariusAf | Project: MesoNet | Lines: 14 | Source file: classifiers.py

