
Self-study tutorial: C++ BufferGetPageSize function code examples

51自学网 2021-06-01 19:55:52 | C++
This tutorial, "C++ BufferGetPageSize function code examples", is quite practical; we hope it helps you.

This article collects and summarizes typical usage examples of the BufferGetPageSize function in C++. If you are wondering exactly what BufferGetPageSize does, how to call it, or what real-world uses of it look like, the curated examples below should help.

A total of 30 code examples of the BufferGetPageSize function are shown below, ordered roughly by popularity.
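
Before the individual examples, here is a minimal sketch of the pattern that almost all of them share: fetch the page behind a buffer with BufferGetPage, and pass BufferGetPageSize(buf) as the size argument when laying the page out with PageInit (the access methods below wrap this in helpers such as _hash_pageinit, GinInitPage, or SpGistInitPage). The helper name init_new_page and the assumption that the caller already holds an exclusive lock on the buffer are illustrative choices for this sketch, not code taken from any of the quoted projects.

#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"

/*
 * init_new_page -- hypothetical helper illustrating the common pattern.
 *
 * Assumes the caller has pinned 'buf' and holds an exclusive content lock.
 */
static void
init_new_page(Buffer buf, Size specialSize)
{
    Page    page = BufferGetPage(buf);

    /*
     * BufferGetPageSize() reports the size of the page backing this buffer
     * (normally BLCKSZ); PageInit() formats an empty page of that size with
     * 'specialSize' bytes reserved for the access method's special space.
     */
    if (PageIsNew(page))
        PageInit(page, BufferGetPageSize(buf), specialSize);

    MarkBufferDirty(buf);
}

Each of the 30 examples below is a variation on this pattern, drawn from PostgreSQL and PostgreSQL-derived projects.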

Example 1: _hash_getnewbuf

/*
 *  _hash_getnewbuf() -- Get a new page at the end of the index.
 *
 *      This has the same API as _hash_getinitbuf, except that we are adding
 *      a page to the index, and hence expect the page to be past the
 *      logical EOF.  (However, we have to support the case where it isn't,
 *      since a prior try might have crashed after extending the filesystem
 *      EOF but before updating the metapage to reflect the added page.)
 *
 *      It is caller's responsibility to ensure that only one process can
 *      extend the index at a time.
 */
Buffer
_hash_getnewbuf(Relation rel, BlockNumber blkno, ForkNumber forkNum)
{
    BlockNumber nblocks = RelationGetNumberOfBlocksInFork(rel, forkNum);
    Buffer      buf;

    if (blkno == P_NEW)
        elog(ERROR, "hash AM does not use P_NEW");
    if (blkno > nblocks)
        elog(ERROR, "access to noncontiguous page in hash index \"%s\"",
             RelationGetRelationName(rel));

    /* smgr insists we use P_NEW to extend the relation */
    if (blkno == nblocks)
    {
        buf = ReadBufferExtended(rel, forkNum, P_NEW, RBM_NORMAL, NULL);
        if (BufferGetBlockNumber(buf) != blkno)
            elog(ERROR, "unexpected hash relation size: %u, should be %u",
                 BufferGetBlockNumber(buf), blkno);
    }
    else
        buf = ReadBufferExtended(rel, forkNum, blkno, RBM_ZERO, NULL);

    LockBuffer(buf, HASH_WRITE);

    /* ref count and lock type are correct */

    /* initialize the page */
    _hash_pageinit(BufferGetPage(buf), BufferGetPageSize(buf));

    return buf;
}
Developer: silam, Project: NewXamarin, Lines: 44


Example 2: _hash_initbuf

/*
 *  _hash_initbuf() -- Get and initialize a buffer by bucket number.
 */
void
_hash_initbuf(Buffer buf, uint32 max_bucket, uint32 num_bucket, uint32 flag,
              bool initpage)
{
    HashPageOpaque pageopaque;
    Page        page;

    page = BufferGetPage(buf);

    /* initialize the page */
    if (initpage)
        _hash_pageinit(page, BufferGetPageSize(buf));

    pageopaque = (HashPageOpaque) PageGetSpecialPointer(page);

    /*
     * Set hasho_prevblkno with current hashm_maxbucket. This value will be
     * used to validate cached HashMetaPageData. See
     * _hash_getbucketbuf_from_hashkey().
     */
    pageopaque->hasho_prevblkno = max_bucket;
    pageopaque->hasho_nextblkno = InvalidBlockNumber;
    pageopaque->hasho_bucket = num_bucket;
    pageopaque->hasho_flag = flag;
    pageopaque->hasho_page_id = HASHO_PAGE_ID;
}
Developer: gustavocolombelli, Project: postgres, Lines: 29


Example 3: _hash_initbitmapbuffer

/*
 *  _hash_initbitmapbuffer()
 *
 *   Initialize a new bitmap page.  All bits in the new bitmap page are set to
 *   "1", indicating "in use".
 */
void
_hash_initbitmapbuffer(Buffer buf, uint16 bmsize, bool initpage)
{
    Page        pg;
    HashPageOpaque op;
    uint32     *freep;

    pg = BufferGetPage(buf);

    /* initialize the page */
    if (initpage)
        _hash_pageinit(pg, BufferGetPageSize(buf));

    /* initialize the page's special space */
    op = (HashPageOpaque) PageGetSpecialPointer(pg);
    op->hasho_prevblkno = InvalidBlockNumber;
    op->hasho_nextblkno = InvalidBlockNumber;
    op->hasho_bucket = -1;
    op->hasho_flag = LH_BITMAP_PAGE;
    op->hasho_page_id = HASHO_PAGE_ID;

    /* set all of the bits to 1 */
    freep = HashPageGetBitmap(pg);
    MemSet(freep, 0xFF, bmsize);

    /*
     * Set pd_lower just past the end of the bitmap page data.  We could even
     * set pd_lower equal to pd_upper, but this is more precise and makes the
     * page look compressible to xlog.c.
     */
    ((PageHeader) pg)->pd_lower = ((char *) freep + bmsize) - (char *) pg;
}
Developer: bitnine-oss, Project: agens-graph, Lines: 38


Example 4: _bitmap_init_lovpage

/*
 * _bitmap_init_lovpage -- initialize a new LOV page.
 */
void
_bitmap_init_lovpage(Relation rel __attribute__((unused)), Buffer buf)
{
    Page        page;

    page = (Page) BufferGetPage(buf);

    if (PageIsNew(page))
        PageInit(page, BufferGetPageSize(buf), 0);
}
Developer: AnLingm, Project: gpdb, Lines: 13


示例5: SpGistInitMetabuffer

voidSpGistInitMetabuffer(Buffer b, Relation index){    SpGistMetaPageData   *metadata;    Page                page = BufferGetPage(b);    SpGistInitPage(page, SPGIST_META, BufferGetPageSize(b));    metadata = SpGistPageGetMeta(page);    memset(metadata, 0, sizeof(SpGistMetaPageData));    metadata->magickNumber = SPGIST_MAGICK_NUMBER;}
开发者ID:jasonhubs,项目名称:sp-gist,代码行数:10,


Example 6: _hash_addovflpage

/* *	_hash_addovflpage * *	Add an overflow page to the bucket whose last page is pointed to by 'buf'. * *	On entry, the caller must hold a pin but no lock on 'buf'.	The pin is *	dropped before exiting (we assume the caller is not interested in 'buf' *	anymore).  The returned overflow page will be pinned and write-locked; *	it is guaranteed to be empty. * *	The caller must hold a pin, but no lock, on the metapage buffer. *	That buffer is returned in the same state. * *	The caller must hold at least share lock on the bucket, to ensure that *	no one else tries to compact the bucket meanwhile.	This guarantees that *	'buf' won't stop being part of the bucket while it's unlocked. * * NB: since this could be executed concurrently by multiple processes, * one should not assume that the returned overflow page will be the * immediate successor of the originally passed 'buf'.	Additional overflow * pages might have been added to the bucket chain in between. */Buffer_hash_addovflpage(Relation rel, Buffer metabuf, Buffer buf){	Buffer		ovflbuf;	Page		page;	Page		ovflpage;	HashPageOpaque pageopaque;	HashPageOpaque ovflopaque;	/* allocate and lock an empty overflow page */	ovflbuf = _hash_getovflpage(rel, metabuf);	ovflpage = BufferGetPage(ovflbuf);	/*	 * Write-lock the tail page.  It is okay to hold two buffer locks here	 * since there cannot be anyone else contending for access to ovflbuf.	 */	_hash_chgbufaccess(rel, buf, HASH_NOLOCK, HASH_WRITE);	/* loop to find current tail page, in case someone else inserted too */	for (;;)	{		BlockNumber nextblkno;		_hash_checkpage(rel, buf, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);		page = BufferGetPage(buf);		pageopaque = (HashPageOpaque) PageGetSpecialPointer(page);		nextblkno = pageopaque->hasho_nextblkno;		if (!BlockNumberIsValid(nextblkno))			break;		/* we assume we do not need to write the unmodified page */		_hash_relbuf(rel, buf);		buf = _hash_getbuf(rel, nextblkno, HASH_WRITE);	}	/* now that we have correct backlink, initialize new overflow page */	_hash_pageinit(ovflpage, BufferGetPageSize(ovflbuf));	ovflopaque = (HashPageOpaque) PageGetSpecialPointer(ovflpage);	ovflopaque->hasho_prevblkno = BufferGetBlockNumber(buf);	ovflopaque->hasho_nextblkno = InvalidBlockNumber;	ovflopaque->hasho_bucket = pageopaque->hasho_bucket;	ovflopaque->hasho_flag = LH_OVERFLOW_PAGE;	ovflopaque->hasho_filler = HASHO_FILL;	MarkBufferDirty(ovflbuf);	/* logically chain overflow page to previous page */	pageopaque->hasho_nextblkno = BufferGetBlockNumber(ovflbuf);	_hash_wrtbuf(rel, buf);	return ovflbuf;}
Developer: shubham2094, Project: postgresql_8.2, Lines: 77


Example 7: _hash_initbitmap

/* *	_hash_initbitmap() * *	 Initialize a new bitmap page.	The metapage has a write-lock upon *	 entering the function, and must be written by caller after return. * * 'blkno' is the block number of the new bitmap page. * * All bits in the new bitmap page are set to "1", indicating "in use". */void_hash_initbitmap(Relation rel, HashMetaPage metap, BlockNumber blkno){	Buffer		buf;	Page		pg;	HashPageOpaque op;	uint32	   *freep;	/*	 * It is okay to write-lock the new bitmap page while holding metapage	 * write lock, because no one else could be contending for the new page.	 * Also, the metapage lock makes it safe to extend the index using P_NEW,	 * which we want to do to ensure the smgr's idea of the relation size	 * stays in step with ours.	 *	 * There is some loss of concurrency in possibly doing I/O for the new	 * page while holding the metapage lock, but this path is taken so seldom	 * that it's not worth worrying about.	 */	buf = _hash_getbuf(rel, P_NEW, HASH_WRITE);	if (BufferGetBlockNumber(buf) != blkno)		elog(ERROR, "unexpected hash relation size: %u, should be %u",			 BufferGetBlockNumber(buf), blkno);	pg = BufferGetPage(buf);	/* initialize the page */	_hash_pageinit(pg, BufferGetPageSize(buf));	op = (HashPageOpaque) PageGetSpecialPointer(pg);	op->hasho_prevblkno = InvalidBlockNumber;	op->hasho_nextblkno = InvalidBlockNumber;	op->hasho_bucket = -1;	op->hasho_flag = LH_BITMAP_PAGE;	op->hasho_filler = HASHO_FILL;	/* set all of the bits to 1 */	freep = HashPageGetBitmap(pg);	MemSet(freep, 0xFF, BMPGSZ_BYTE(metap));	/* write out the new bitmap page (releasing write lock and pin) */	_hash_wrtbuf(rel, buf);	/* add the new bitmap page to the metapage's list of bitmaps */	/* metapage already has a write lock */	if (metap->hashm_nmaps >= HASH_MAX_BITMAPS)		ereport(ERROR,				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),				 errmsg("out of overflow pages in hash index /"%s/"",						RelationGetRelationName(rel))));	metap->hashm_mapp[metap->hashm_nmaps] = blkno;	metap->hashm_nmaps++;}
Developer: shubham2094, Project: postgresql_8.2, Lines: 64


Example 8: BloomInitMetabuffer

void
BloomInitMetabuffer(Buffer b, Relation index)
{
    BloomMetaPageData  *metadata;
    Page                page = BufferGetPage(b);

    BloomInitPage(page, BLOOM_META, BufferGetPageSize(b));

    metadata = BloomPageGetMeta(page);
    memset(metadata, 0, sizeof(BloomMetaPageData));
    metadata->magickNumber = BLOOM_MAGICK_NUMBER;
    metadata->opts = *makeDefaultBloomOptions((BloomOptions *) index->rd_options);
}
Developer: awakmu, Project: pgbloom, Lines: 12


Example 9: ginRedoDeleteListPages

static voidginRedoDeleteListPages(XLogReaderState *record){	XLogRecPtr	lsn = record->EndRecPtr;	ginxlogDeleteListPages *data = (ginxlogDeleteListPages *) XLogRecGetData(record);	Buffer		metabuffer;	Page		metapage;	int			i;	metabuffer = XLogInitBufferForRedo(record, 0);	Assert(BufferGetBlockNumber(metabuffer) == GIN_METAPAGE_BLKNO);	metapage = BufferGetPage(metabuffer);	GinInitPage(metapage, GIN_META, BufferGetPageSize(metabuffer));	memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData));	PageSetLSN(metapage, lsn);	MarkBufferDirty(metabuffer);	/*	 * In normal operation, shiftList() takes exclusive lock on all the	 * pages-to-be-deleted simultaneously.  During replay, however, it should	 * be all right to lock them one at a time.  This is dependent on the fact	 * that we are deleting pages from the head of the list, and that readers	 * share-lock the next page before releasing the one they are on. So we	 * cannot get past a reader that is on, or due to visit, any page we are	 * going to delete.  New incoming readers will block behind our metapage	 * lock and then see a fully updated page list.	 *	 * No full-page images are taken of the deleted pages. Instead, they are	 * re-initialized as empty, deleted pages. Their right-links don't need to	 * be preserved, because no new___ readers can see the pages, as explained	 * above.	 */	for (i = 0; i < data->ndeleted; i++)	{		Buffer		buffer;		Page		page;		buffer = XLogInitBufferForRedo(record, i + 1);		page = BufferGetPage(buffer);		GinInitBuffer(buffer, GIN_DELETED);		PageSetLSN(page, lsn);		MarkBufferDirty(buffer);		UnlockReleaseBuffer(buffer);	}	UnlockReleaseBuffer(metabuffer);}
Developer: EccentricLoggers, Project: peloton, Lines: 50


Example 10: GinInitMetabuffer

void
GinInitMetabuffer(Buffer b)
{
    GinMetaPageData *metadata;
    Page        page = BufferGetPage(b);

    GinInitPage(page, GIN_META, BufferGetPageSize(b));

    metadata = GinPageGetMeta(page);

    metadata->head = metadata->tail = InvalidBlockNumber;
    metadata->tailFreeSize = 0;
    metadata->nPendingPages = 0;
    metadata->nPendingHeapTuples = 0;
}
Developer: Joe-xXx, Project: postgres-old-soon-decommissioned, Lines: 15


Example 11: RTInitBuffer

static void
RTInitBuffer(Buffer b, uint32 f)
{
    RTreePageOpaque opaque;
    Page        page;
    Size        pageSize;

    pageSize = BufferGetPageSize(b);
    page = BufferGetPage(b);

    PageInit(page, pageSize, sizeof(RTreePageOpaqueData));

    opaque = (RTreePageOpaque) PageGetSpecialPointer(page);
    opaque->flags = f;
}
Developer: CraigBryan, Project: PostgresqlFun, Lines: 16


Example 12: GISTInitBuffer

/*
 * Initialize a new index page
 */
void
GISTInitBuffer(Buffer b, uint32 f)
{
    GISTPageOpaque opaque;
    Page        page;
    Size        pageSize;

    pageSize = BufferGetPageSize(b);
    page = BufferGetPage(b);
    PageInit(page, pageSize, sizeof(GISTPageOpaqueData));

    opaque = GistPageGetOpaque(page);
    opaque->flags = f;
    opaque->rightlink = InvalidBlockNumber;
    /* page was already zeroed by PageInit, so this is not needed: */
    /* memset(&(opaque->nsn), 0, sizeof(GistNSN)); */
}
Developer: asurinsaka, Project: postgresql-8.2.19-lru, Lines: 20


Example 13: _bitmap_init_bitmappage

/*
 * _bitmap_init_bitmappage() -- initialize a new page to store the bitmap.
 */
void
_bitmap_init_bitmappage(Relation rel __attribute__((unused)), Buffer buf)
{
    Page            page;
    BMBitmapOpaque  opaque;

    page = (Page) BufferGetPage(buf);

    if (PageIsNew(page))
        PageInit(page, BufferGetPageSize(buf), sizeof(BMBitmapOpaqueData));

    /* even though page may not be new, reset all values */
    opaque = (BMBitmapOpaque) PageGetSpecialPointer(page);
    opaque->bm_hrl_words_used = 0;
    opaque->bm_bitmap_next = InvalidBlockNumber;
    opaque->bm_last_tid_location = 0;
}
Developer: AnLingm, Project: gpdb, Lines: 20


Example 14: _hash_getinitbuf

/*
 *  _hash_getinitbuf() -- Get and initialize a buffer by block number.
 *
 *      This must be used only to fetch pages that are known to be before
 *      the index's filesystem EOF, but are to be filled from scratch.
 *      _hash_pageinit() is applied automatically.  Otherwise it has
 *      effects similar to _hash_getbuf() with access = HASH_WRITE.
 *
 *      When this routine returns, a write lock is set on the
 *      requested buffer and its reference count has been incremented
 *      (ie, the buffer is "locked and pinned").
 *
 *      P_NEW is disallowed because this routine can only be used
 *      to access pages that are known to be before the filesystem EOF.
 *      Extending the index should be done with _hash_getnewbuf.
 */
Buffer
_hash_getinitbuf(Relation rel, BlockNumber blkno)
{
    Buffer      buf;

    if (blkno == P_NEW)
        elog(ERROR, "hash AM does not use P_NEW");

    buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno, RBM_ZERO_AND_LOCK,
                             NULL);

    /* ref count and lock type are correct */

    /* initialize the page */
    _hash_pageinit(BufferGetPage(buf), BufferGetPageSize(buf));

    return buf;
}
Developer: johto, Project: postgres, Lines: 34


Example 15: _bt_restore_meta

static void_bt_restore_meta(XLogReaderState *record, uint8 block_id){	XLogRecPtr	lsn = record->EndRecPtr;	Buffer		metabuf;	Page		metapg;	BTMetaPageData *md;	BTPageOpaque pageop;	xl_btree_metadata *xlrec;	char	   *ptr;	Size		len;	metabuf = XLogInitBufferForRedo(record, block_id);	ptr = XLogRecGetBlockData(record, block_id, &len);	Assert(len == sizeof(xl_btree_metadata));	Assert(BufferGetBlockNumber(metabuf) == BTREE_METAPAGE);	xlrec = (xl_btree_metadata *) ptr;	metapg = BufferGetPage(metabuf);	_bt_pageinit(metapg, BufferGetPageSize(metabuf));	md = BTPageGetMeta(metapg);	md->btm_magic = BTREE_MAGIC;	md->btm_version = BTREE_VERSION;	md->btm_root = xlrec->root;	md->btm_level = xlrec->level;	md->btm_fastroot = xlrec->fastroot;	md->btm_fastlevel = xlrec->fastlevel;	pageop = (BTPageOpaque) PageGetSpecialPointer(metapg);	pageop->btpo_flags = BTP_META;	/*	 * Set pd_lower just past the end of the metadata.  This is not essential	 * but it makes the page look compressible to xlog.c.	 */	((PageHeader) metapg)->pd_lower =		((char *) md + sizeof(BTMetaPageData)) - (char *) metapg;	PageSetLSN(metapg, lsn);	MarkBufferDirty(metabuf);	UnlockReleaseBuffer(metabuf);}
Developer: JiannengSun, Project: postgres, Lines: 44


Example 16: GinInitMetabuffer

void
GinInitMetabuffer(Buffer b)
{
    GinMetaPageData *metadata;
    Page        page = BufferGetPage(b);

    GinInitPage(page, GIN_META, BufferGetPageSize(b));

    metadata = GinPageGetMeta(page);

    metadata->head = metadata->tail = InvalidBlockNumber;
    metadata->tailFreeSize = 0;
    metadata->nPendingPages = 0;
    metadata->nPendingHeapTuples = 0;
    metadata->nTotalPages = 0;
    metadata->nEntryPages = 0;
    metadata->nDataPages = 0;
    metadata->nEntries = 0;
    metadata->ginVersion = GIN_CURRENT_VERSION;
}
Developer: LittleForker, Project: postgres, Lines: 20


Example 17: btree_xlog_newroot

static voidbtree_xlog_newroot(XLogReaderState *record){	XLogRecPtr	lsn = record->EndRecPtr;	xl_btree_newroot *xlrec = (xl_btree_newroot *) XLogRecGetData(record);	Buffer		buffer;	Page		page;	BTPageOpaque pageop;	char	   *ptr;	Size		len;	buffer = XLogInitBufferForRedo(record, 0);	page = (Page) BufferGetPage(buffer);	_bt_pageinit(page, BufferGetPageSize(buffer));	pageop = (BTPageOpaque) PageGetSpecialPointer(page);	pageop->btpo_flags = BTP_ROOT;	pageop->btpo_prev = pageop->btpo_next = P_NONE;	pageop->btpo.level = xlrec->level;	if (xlrec->level == 0)		pageop->btpo_flags |= BTP_LEAF;	pageop->btpo_cycleid = 0;	if (xlrec->level > 0)	{		ptr = XLogRecGetBlockData(record, 0, &len);		_bt_restore_page(page, ptr, len);		/* Clear the incomplete-split flag in left child */		_bt_clear_incomplete_split(record, 1);	}	PageSetLSN(page, lsn);	MarkBufferDirty(buffer);	UnlockReleaseBuffer(buffer);	_bt_restore_meta(record, 2);}
Developer: JiannengSun, Project: postgres, Lines: 39


Example 18: _hash_metapinit

/* *	_hash_metapinit() -- Initialize the metadata page of a hash index, *				the two buckets that we begin with and the initial *				bitmap page. * * We are fairly cavalier about locking here, since we know that no one else * could be accessing this index.  In particular the rule about not holding * multiple buffer locks is ignored. */void_hash_metapinit(Relation rel){	HashMetaPage metap;	HashPageOpaque pageopaque;	Buffer		metabuf;	Buffer		buf;	Page		pg;	int32		data_width;	int32		item_width;	int32		ffactor;	uint16		i;	/* safety check */	if (RelationGetNumberOfBlocks(rel) != 0)		elog(ERROR, "cannot initialize non-empty hash index /"%s/"",			 RelationGetRelationName(rel));	/*	 * Determine the target fill factor (tuples per bucket) for this index.	 * The idea is to make the fill factor correspond to pages about 3/4ths	 * full.  We can compute it exactly if the index datatype is fixed-width,	 * but for var-width there's some guessing involved.	 */	data_width = get_typavgwidth(RelationGetDescr(rel)->attrs[0]->atttypid,								 RelationGetDescr(rel)->attrs[0]->atttypmod);	item_width = MAXALIGN(sizeof(HashItemData)) + MAXALIGN(data_width) +		sizeof(ItemIdData);		/* include the line pointer */	ffactor = (BLCKSZ * 3 / 4) / item_width;	/* keep to a sane range */	if (ffactor < 10)		ffactor = 10;	metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_WRITE);	pg = BufferGetPage(metabuf);	_hash_pageinit(pg, BufferGetPageSize(metabuf));	pageopaque = (HashPageOpaque) PageGetSpecialPointer(pg);	pageopaque->hasho_prevblkno = InvalidBlockNumber;	pageopaque->hasho_nextblkno = InvalidBlockNumber;	pageopaque->hasho_bucket = -1;	pageopaque->hasho_flag = LH_META_PAGE;	pageopaque->hasho_filler = HASHO_FILL;	metap = (HashMetaPage) pg;	metap->hashm_magic = HASH_MAGIC;	metap->hashm_version = HASH_VERSION;	metap->hashm_ntuples = 0;	metap->hashm_nmaps = 0;	metap->hashm_ffactor = ffactor;	metap->hashm_bsize = BufferGetPageSize(metabuf);	/* find largest bitmap array size that will fit in page size */	for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)	{		if ((1 << i) <= (metap->hashm_bsize -						 (MAXALIGN(sizeof(PageHeaderData)) +						  MAXALIGN(sizeof(HashPageOpaqueData)))))			break;	}	Assert(i > 0);	metap->hashm_bmsize = 1 << i;	metap->hashm_bmshift = i + BYTE_TO_BIT;	Assert((1 << BMPG_SHIFT(metap)) == (BMPG_MASK(metap) + 1));	metap->hashm_procid = index_getprocid(rel, 1, HASHPROC);	/*	 * We initialize the index with two buckets, 0 and 1, occupying physical	 * blocks 1 and 2.  The first freespace bitmap page is in block 3.	 */	metap->hashm_maxbucket = metap->hashm_lowmask = 1;	/* nbuckets - 1 */	metap->hashm_highmask = 3;	/* (nbuckets << 1) - 1 */	MemSet((char *) metap->hashm_spares, 0, sizeof(metap->hashm_spares));	MemSet((char *) metap->hashm_mapp, 0, sizeof(metap->hashm_mapp));	metap->hashm_spares[1] = 1;	/* the first bitmap page is only spare */	metap->hashm_ovflpoint = 1;	metap->hashm_firstfree = 0;	/*	 * Initialize the first two buckets	 */	for (i = 0; i <= 1; i++)	{		buf = _hash_getbuf(rel, BUCKET_TO_BLKNO(metap, i), HASH_WRITE);		pg = BufferGetPage(buf);		_hash_pageinit(pg, BufferGetPageSize(buf));		pageopaque = (HashPageOpaque) PageGetSpecialPointer(pg);		pageopaque->hasho_prevblkno = InvalidBlockNumber;//.........这里部分代码省略.........
Developer: sunyangkobe, Project: cscd43, Lines: 101


Example 19: SpGistInitBuffer

void
SpGistInitBuffer(Buffer b, uint16 f)
{
    SpGistInitPage(BufferGetPage(b), f, BufferGetPageSize(b));
}
Developer: jasonhubs, Project: sp-gist, Lines: 5


Example 20: spgstat

Datumspgstat(PG_FUNCTION_ARGS){    text    	*name=PG_GETARG_TEXT_P(0);    char 		*relname=text_to_cstring(name);    RangeVar   	*relvar;    Relation    index;    List       	*relname_list;    Oid			relOid;    BlockNumber	blkno = SPGIST_HEAD_BLKNO;    BlockNumber	totalPages = 0,                innerPages = 0,                emptyPages = 0;    double		usedSpace = 0.0;    char		res[1024];    int			bufferSize = -1;    int64		innerTuples = 0,                leafTuples = 0;    relname_list = stringToQualifiedNameList(relname);    relvar = makeRangeVarFromNameList(relname_list);    relOid = RangeVarGetRelid(relvar, false);    index = index_open(relOid, AccessExclusiveLock);    if ( index->rd_am == NULL )        elog(ERROR, "Relation %s.%s is not an index",             get_namespace_name(RelationGetNamespace(index)),             RelationGetRelationName(index) );    totalPages = RelationGetNumberOfBlocks(index);    for(blkno=SPGIST_HEAD_BLKNO; blkno<totalPages; blkno++)    {        Buffer	buffer;        Page	page;        buffer = ReadBuffer(index, blkno);        LockBuffer(buffer, BUFFER_LOCK_SHARE);        page = BufferGetPage(buffer);        if (SpGistPageIsLeaf(page))        {            leafTuples += SpGistPageGetMaxOffset(page);        }        else        {            innerPages++;            innerTuples += SpGistPageGetMaxOffset(page);        }        if (bufferSize < 0)            bufferSize = BufferGetPageSize(buffer) - MAXALIGN(sizeof(SpGistPageOpaqueData)) -                         SizeOfPageHeaderData;        usedSpace += bufferSize - (PageGetFreeSpace(page) + sizeof(ItemIdData));        if (PageGetFreeSpace(page) + sizeof(ItemIdData) == bufferSize)            emptyPages++;        UnlockReleaseBuffer(buffer);    }    index_close(index, AccessExclusiveLock);    totalPages--; /* metapage */    snprintf(res, sizeof(res),             "totalPages:  %u/n"             "innerPages:  %u/n"             "leafPages:   %u/n"             "emptyPages:  %u/n"             "usedSpace:   %.2f kbytes/n"             "freeSpace:   %.2f kbytes/n"             "fillRatio:   %.2f%c/n"             "leafTuples:  %lld/n"             "innerTuples: %lld",             totalPages, innerPages, totalPages - innerPages, emptyPages,             usedSpace / 1024.0,             (( (double) bufferSize ) * ( (double) totalPages ) - usedSpace) / 1024,             100.0 * ( usedSpace / (( (double) bufferSize ) * ( (double) totalPages )) ),             '%',             leafTuples, innerTuples            );    PG_RETURN_TEXT_P(CStringGetTextDatum(res));}
Developer: jasonhubs, Project: sp-gist, Lines: 87


Example 21: btree_xlog_unlink_page

static voidbtree_xlog_unlink_page(uint8 info, XLogReaderState *record){	XLogRecPtr	lsn = record->EndRecPtr;	xl_btree_unlink_page *xlrec = (xl_btree_unlink_page *) XLogRecGetData(record);	BlockNumber leftsib;	BlockNumber rightsib;	Buffer		buffer;	Page		page;	BTPageOpaque pageop;	leftsib = xlrec->leftsib;	rightsib = xlrec->rightsib;	/*	 * In normal operation, we would lock all the pages this WAL record	 * touches before changing any of them.  In WAL replay, it should be okay	 * to lock just one page at a time, since no concurrent index updates can	 * be happening, and readers should not care whether they arrive at the	 * target page or not (since it's surely empty).	 */	/* Fix left-link of right sibling */	if (XLogReadBufferForRedo(record, 2, &buffer) == BLK_NEEDS_REDO)	{		page = (Page) BufferGetPage(buffer);		pageop = (BTPageOpaque) PageGetSpecialPointer(page);		pageop->btpo_prev = leftsib;		PageSetLSN(page, lsn);		MarkBufferDirty(buffer);	}	if (BufferIsValid(buffer))		UnlockReleaseBuffer(buffer);	/* Fix right-link of left sibling, if any */	if (leftsib != P_NONE)	{		if (XLogReadBufferForRedo(record, 1, &buffer) == BLK_NEEDS_REDO)		{			page = (Page) BufferGetPage(buffer);			pageop = (BTPageOpaque) PageGetSpecialPointer(page);			pageop->btpo_next = rightsib;			PageSetLSN(page, lsn);			MarkBufferDirty(buffer);		}		if (BufferIsValid(buffer))			UnlockReleaseBuffer(buffer);	}	/* Rewrite target page as empty deleted page */	buffer = XLogInitBufferForRedo(record, 0);	page = (Page) BufferGetPage(buffer);	_bt_pageinit(page, BufferGetPageSize(buffer));	pageop = (BTPageOpaque) PageGetSpecialPointer(page);	pageop->btpo_prev = leftsib;	pageop->btpo_next = rightsib;	pageop->btpo.xact = xlrec->btpo_xact;	pageop->btpo_flags = BTP_DELETED;	pageop->btpo_cycleid = 0;	PageSetLSN(page, lsn);	MarkBufferDirty(buffer);	UnlockReleaseBuffer(buffer);	/*	 * If we deleted a parent of the targeted leaf page, instead of the leaf	 * itself, update the leaf to point to the next remaining child in the	 * branch.	 */	if (XLogRecHasBlockRef(record, 3))	{		/*		 * There is no real data on the page, so we just re-create it from		 * scratch using the information from the WAL record.		 */		IndexTupleData trunctuple;		buffer = XLogInitBufferForRedo(record, 3);		page = (Page) BufferGetPage(buffer);		pageop = (BTPageOpaque) PageGetSpecialPointer(page);		_bt_pageinit(page, BufferGetPageSize(buffer));		pageop->btpo_flags = BTP_HALF_DEAD | BTP_LEAF;		pageop->btpo_prev = xlrec->leafleftsib;		pageop->btpo_next = xlrec->leafrightsib;		pageop->btpo.level = 0;		pageop->btpo_cycleid = 0;		/* Add a dummy hikey item */		MemSet(&trunctuple, 0, sizeof(IndexTupleData));		trunctuple.t_info = sizeof(IndexTupleData);		if (xlrec->topparent != InvalidBlockNumber)			ItemPointerSet(&trunctuple.t_tid, xlrec->topparent, P_HIKEY);		else			ItemPointerSetInvalid(&trunctuple.t_tid);		if (PageAddItem(page, (Item) &trunctuple, sizeof(IndexTupleData), P_HIKEY,//.........这里部分代码省略.........
Developer: JiannengSun, Project: postgres, Lines: 101


Example 22: _hash_freeovflpage

/* *	_hash_freeovflpage() - * *	Remove this overflow page from its bucket's chain, and mark the page as *	free.  On entry, ovflbuf is write-locked; it is released before exiting. * *	Since this function is invoked in VACUUM, we provide an access strategy *	parameter that controls fetches of the bucket pages. * *	Returns the block number of the page that followed the given page *	in the bucket, or InvalidBlockNumber if no following page. * *	NB: caller must not hold lock on metapage, nor on either page that's *	adjacent in the bucket chain.  The caller had better hold exclusive lock *	on the bucket, too. */BlockNumber_hash_freeovflpage(Relation rel, Buffer ovflbuf,                   BufferAccessStrategy bstrategy){    HashMetaPage metap;    Buffer		metabuf;    Buffer		mapbuf;    BlockNumber ovflblkno;    BlockNumber prevblkno;    BlockNumber blkno;    BlockNumber nextblkno;    HashPageOpaque ovflopaque;    Page		ovflpage;    Page		mappage;    uint32	   *freep;    uint32		ovflbitno;    int32		bitmappage,                bitmapbit;    Bucket bucket PG_USED_FOR_ASSERTS_ONLY;    /* Get information from the doomed page */    _hash_checkpage(rel, ovflbuf, LH_OVERFLOW_PAGE);    ovflblkno = BufferGetBlockNumber(ovflbuf);    ovflpage = BufferGetPage(ovflbuf);    ovflopaque = (HashPageOpaque) PageGetSpecialPointer(ovflpage);    nextblkno = ovflopaque->hasho_nextblkno;    prevblkno = ovflopaque->hasho_prevblkno;    bucket = ovflopaque->hasho_bucket;    /*     * Zero the page for debugging's sake; then write and release it. (Note:     * if we failed to zero the page here, we'd have problems with the Assert     * in _hash_pageinit() when the page is reused.)     */    MemSet(ovflpage, 0, BufferGetPageSize(ovflbuf));    _hash_wrtbuf(rel, ovflbuf);    /*     * Fix up the bucket chain.  this is a doubly-linked list, so we must fix     * up the bucket chain members behind and ahead of the overflow page being     * deleted.  No concurrency issues since we hold exclusive lock on the     * entire bucket.     
*/    if (BlockNumberIsValid(prevblkno))    {        Buffer		prevbuf = _hash_getbuf_with_strategy(rel,                              prevblkno,                              HASH_WRITE,                              LH_BUCKET_PAGE | LH_OVERFLOW_PAGE,                              bstrategy);        Page		prevpage = BufferGetPage(prevbuf);        HashPageOpaque prevopaque = (HashPageOpaque) PageGetSpecialPointer(prevpage);        Assert(prevopaque->hasho_bucket == bucket);        prevopaque->hasho_nextblkno = nextblkno;        _hash_wrtbuf(rel, prevbuf);    }    if (BlockNumberIsValid(nextblkno))    {        Buffer		nextbuf = _hash_getbuf_with_strategy(rel,                              nextblkno,                              HASH_WRITE,                              LH_OVERFLOW_PAGE,                              bstrategy);        Page		nextpage = BufferGetPage(nextbuf);        HashPageOpaque nextopaque = (HashPageOpaque) PageGetSpecialPointer(nextpage);        Assert(nextopaque->hasho_bucket == bucket);        nextopaque->hasho_prevblkno = prevblkno;        _hash_wrtbuf(rel, nextbuf);    }    /* Note: bstrategy is intentionally not used for metapage and bitmap */    /* Read the metapage so we can determine which bitmap page to use */    metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);    metap = HashPageGetMeta(BufferGetPage(metabuf));    /* Identify which bit to set */    ovflbitno = blkno_to_bitno(metap, ovflblkno);    bitmappage = ovflbitno >> BMPG_SHIFT(metap);    bitmapbit = ovflbitno & BMPG_MASK(metap);//.........这里部分代码省略.........
Developer: nathanwadhwani, Project: recdb-postgresql, Lines: 101


示例23: btree_xlog_mark_page_halfdead

static voidbtree_xlog_mark_page_halfdead(uint8 info, XLogReaderState *record){	XLogRecPtr	lsn = record->EndRecPtr;	xl_btree_mark_page_halfdead *xlrec = (xl_btree_mark_page_halfdead *) XLogRecGetData(record);	Buffer		buffer;	Page		page;	BTPageOpaque pageop;	IndexTupleData trunctuple;	/*	 * In normal operation, we would lock all the pages this WAL record	 * touches before changing any of them.  In WAL replay, it should be okay	 * to lock just one page at a time, since no concurrent index updates can	 * be happening, and readers should not care whether they arrive at the	 * target page or not (since it's surely empty).	 */	/* parent page */	if (XLogReadBufferForRedo(record, 1, &buffer) == BLK_NEEDS_REDO)	{		OffsetNumber poffset;		ItemId		itemid;		IndexTuple	itup;		OffsetNumber nextoffset;		BlockNumber rightsib;		page = (Page) BufferGetPage(buffer);		pageop = (BTPageOpaque) PageGetSpecialPointer(page);		poffset = xlrec->poffset;		nextoffset = OffsetNumberNext(poffset);		itemid = PageGetItemId(page, nextoffset);		itup = (IndexTuple) PageGetItem(page, itemid);		rightsib = ItemPointerGetBlockNumber(&itup->t_tid);		itemid = PageGetItemId(page, poffset);		itup = (IndexTuple) PageGetItem(page, itemid);		ItemPointerSet(&(itup->t_tid), rightsib, P_HIKEY);		nextoffset = OffsetNumberNext(poffset);		PageIndexTupleDelete(page, nextoffset);		PageSetLSN(page, lsn);		MarkBufferDirty(buffer);	}	if (BufferIsValid(buffer))		UnlockReleaseBuffer(buffer);	/* Rewrite the leaf page as a halfdead page */	buffer = XLogInitBufferForRedo(record, 0);	page = (Page) BufferGetPage(buffer);	_bt_pageinit(page, BufferGetPageSize(buffer));	pageop = (BTPageOpaque) PageGetSpecialPointer(page);	pageop->btpo_prev = xlrec->leftblk;	pageop->btpo_next = xlrec->rightblk;	pageop->btpo.level = 0;	pageop->btpo_flags = BTP_HALF_DEAD | BTP_LEAF;	pageop->btpo_cycleid = 0;	/*	 * Construct a dummy hikey item that points to the next parent to be	 * deleted (if any).	 */	MemSet(&trunctuple, 0, sizeof(IndexTupleData));	trunctuple.t_info = sizeof(IndexTupleData);	if (xlrec->topparent != InvalidBlockNumber)		ItemPointerSet(&trunctuple.t_tid, xlrec->topparent, P_HIKEY);	else		ItemPointerSetInvalid(&trunctuple.t_tid);	if (PageAddItem(page, (Item) &trunctuple, sizeof(IndexTupleData), P_HIKEY,					false, false) == InvalidOffsetNumber)		elog(ERROR, "could not add dummy high key to half-dead page");	PageSetLSN(page, lsn);	MarkBufferDirty(buffer);	UnlockReleaseBuffer(buffer);}
Developer: JiannengSun, Project: postgres, Lines: 80


示例24: btree_xlog_split

static voidbtree_xlog_split(bool onleft, bool isroot, XLogReaderState *record){	XLogRecPtr	lsn = record->EndRecPtr;	xl_btree_split *xlrec = (xl_btree_split *) XLogRecGetData(record);	bool		isleaf = (xlrec->level == 0);	Buffer		lbuf;	Buffer		rbuf;	Page		rpage;	BTPageOpaque ropaque;	char	   *datapos;	Size		datalen;	Item		left_hikey = NULL;	Size		left_hikeysz = 0;	BlockNumber leftsib;	BlockNumber rightsib;	BlockNumber rnext;	XLogRecGetBlockTag(record, 0, NULL, NULL, &leftsib);	XLogRecGetBlockTag(record, 1, NULL, NULL, &rightsib);	if (!XLogRecGetBlockTag(record, 2, NULL, NULL, &rnext))		rnext = P_NONE;	/*	 * Clear the incomplete split flag on the left sibling of the child page	 * this is a downlink for.  (Like in btree_xlog_insert, this can be done	 * before locking the other pages)	 */	if (!isleaf)		_bt_clear_incomplete_split(record, 3);	/* Reconstruct right (new) sibling page from scratch */	rbuf = XLogInitBufferForRedo(record, 1);	datapos = XLogRecGetBlockData(record, 1, &datalen);	rpage = (Page) BufferGetPage(rbuf);	_bt_pageinit(rpage, BufferGetPageSize(rbuf));	ropaque = (BTPageOpaque) PageGetSpecialPointer(rpage);	ropaque->btpo_prev = leftsib;	ropaque->btpo_next = rnext;	ropaque->btpo.level = xlrec->level;	ropaque->btpo_flags = isleaf ? BTP_LEAF : 0;	ropaque->btpo_cycleid = 0;	_bt_restore_page(rpage, datapos, datalen);	/*	 * On leaf level, the high key of the left page is equal to the first key	 * on the right page.	 */	if (isleaf)	{		ItemId		hiItemId = PageGetItemId(rpage, P_FIRSTDATAKEY(ropaque));		left_hikey = PageGetItem(rpage, hiItemId);		left_hikeysz = ItemIdGetLength(hiItemId);	}	PageSetLSN(rpage, lsn);	MarkBufferDirty(rbuf);	/* don't release the buffer yet; we touch right page's first item below */	/* Now reconstruct left (original) sibling page */	if (XLogReadBufferForRedo(record, 0, &lbuf) == BLK_NEEDS_REDO)	{		/*		 * To retain the same physical order of the tuples that they had, we		 * initialize a temporary empty page for the left page and add all the		 * items to that in item number order.  This mirrors how _bt_split()		 * works.  It's not strictly required to retain the same physical		 * order, as long as the items are in the correct item number order,		 * but it helps debugging.  See also _bt_restore_page(), which does		 * the same for the right page.		 */		Page		lpage = (Page) BufferGetPage(lbuf);		BTPageOpaque lopaque = (BTPageOpaque) PageGetSpecialPointer(lpage);		OffsetNumber off;		Item		newitem = NULL;		Size		newitemsz = 0;		Page		newlpage;		OffsetNumber leftoff;		datapos = XLogRecGetBlockData(record, 0, &datalen);		if (onleft)		{			newitem = (Item) datapos;			newitemsz = MAXALIGN(IndexTupleSize(newitem));			datapos += newitemsz;			datalen -= newitemsz;		}		/* Extract left hikey and its size (assuming 16-bit alignment) */		if (!isleaf)		{			left_hikey = (Item) datapos;			left_hikeysz = MAXALIGN(IndexTupleSize(left_hikey));			datapos += left_hikeysz;//.........这里部分代码省略.........
Developer: JiannengSun, Project: postgres, Lines: 101


示例25: RelationGetBufferForTuple

//.........这里部分代码省略.........		 * code above.		 */		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);		if (otherBuffer == InvalidBuffer)			ReleaseBuffer(buffer);		else if (otherBlock != targetBlock)		{			LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK);			ReleaseBuffer(buffer);		}		/* Without FSM, always fall out of the loop and extend */		if (!use_fsm)			break;		/*		 * Update FSM as to condition of this page, and ask for another page		 * to try.		 */		targetBlock = RecordAndGetPageWithFreeSpace(relation,													targetBlock,													pageFreeSpace,													len + saveFreeSpace);	}	/*	 * Have to extend the relation.	 *	 * We have to use a lock to ensure no one else is extending the rel at the	 * same time, else we will both try to initialize the same new page.  We	 * can skip locking for new or temp relations, however, since no one else	 * could be accessing them.	 */	needLock = !RELATION_IS_LOCAL(relation);	if (needLock)		LockRelationForExtension(relation, ExclusiveLock);	/*	 * XXX This does an lseek - rather expensive - but at the moment it is the	 * only way to accurately determine how many blocks are in a relation.	Is	 * it worth keeping an accurate file length in shared memory someplace,	 * rather than relying on the kernel to do it for us?	 */	buffer = ReadBufferBI(relation, P_NEW, bistate);	/*	 * We can be certain that locking the otherBuffer first is OK, since it	 * must have a lower page number.	 */	if (otherBuffer != InvalidBuffer)		LockBuffer(otherBuffer, BUFFER_LOCK_EXCLUSIVE);	/*	 * Now acquire lock on the new page.	 */	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);	/*	 * Release the file-extension lock; it's now OK for someone else to extend	 * the relation some more.	Note that we cannot release this lock before	 * we have buffer lock on the new page, or we risk a race condition	 * against vacuumlazy.c --- see comments therein.	 */	if (needLock)		UnlockRelationForExtension(relation, ExclusiveLock);	/*	 * We need to initialize the empty new page.  Double-check that it really	 * is empty (this should never happen, but if it does we don't want to	 * risk wiping out valid data).	 */	page = BufferGetPage(buffer);	if (!PageIsNew(page))		elog(ERROR, "page %u of relation /"%s/" should be empty but is not",			 BufferGetBlockNumber(buffer),			 RelationGetRelationName(relation));	PageInit(page, BufferGetPageSize(buffer), 0);	if (len > PageGetHeapFreeSpace(page))	{		/* We should not get here given the test at the top */		elog(PANIC, "tuple is too big: size %lu", (unsigned long) len);	}	/*	 * Remember the new page as our target for future insertions.	 *	 * XXX should we enter the new page into the free space map immediately,	 * or just keep it for this backend's exclusive use in the short run	 * (until VACUUM sees it)?	Seems to depend on whether you expect the	 * current backend to make more insertions or not, which is probably a	 * good bet most of the time.  So for now, don't add it to FSM yet.	 */	RelationSetTargetBlock(relation, BufferGetBlockNumber(buffer));	return buffer;}
Developer: adunstan, Project: postgresql-dev, Lines: 101


示例26: ginRedoUpdateMetapage

static voidginRedoUpdateMetapage(XLogReaderState *record){	XLogRecPtr	lsn = record->EndRecPtr;	ginxlogUpdateMeta *data = (ginxlogUpdateMeta *) XLogRecGetData(record);	Buffer		metabuffer;	Page		metapage;	Buffer		buffer;	/*	 * Restore the metapage. This is essentially the same as a full-page	 * image, so restore the metapage unconditionally without looking at the	 * LSN, to avoid torn page hazards.	 */	metabuffer = XLogInitBufferForRedo(record, 0);	Assert(BufferGetBlockNumber(metabuffer) == GIN_METAPAGE_BLKNO);	metapage = BufferGetPage(metabuffer);	GinInitPage(metapage, GIN_META, BufferGetPageSize(metabuffer));	memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData));	PageSetLSN(metapage, lsn);	MarkBufferDirty(metabuffer);	if (data->ntuples > 0)	{		/*		 * insert into tail page		 */		if (XLogReadBufferForRedo(record, 1, &buffer) == BLK_NEEDS_REDO)		{			Page		page = BufferGetPage(buffer);			OffsetNumber off;			int			i;			Size		tupsize;			char	   *payload;			IndexTuple	tuples;			Size		totaltupsize;			payload = XLogRecGetBlockData(record, 1, &totaltupsize);			tuples = (IndexTuple) payload;			if (PageIsEmpty(page))				off = FirstOffsetNumber;			else				off = OffsetNumberNext(PageGetMaxOffsetNumber(page));			for (i = 0; i < data->ntuples; i++)			{				tupsize = IndexTupleSize(tuples);				if (PageAddItem(page, (Item) tuples, tupsize, off,								false, false) == InvalidOffsetNumber)					elog(ERROR, "failed to add item to index page");				tuples = (IndexTuple) (((char *) tuples) + tupsize);				off++;			}			Assert(payload + totaltupsize == (char *) tuples);			/*			 * Increase counter of heap tuples			 */			GinPageGetOpaque(page)->maxoff++;			PageSetLSN(page, lsn);			MarkBufferDirty(buffer);		}		if (BufferIsValid(buffer))			UnlockReleaseBuffer(buffer);	}	else if (data->prevTail != InvalidBlockNumber)	{		/*		 * New tail		 */		if (XLogReadBufferForRedo(record, 1, &buffer) == BLK_NEEDS_REDO)		{			Page		page = BufferGetPage(buffer);			GinPageGetOpaque(page)->rightlink = data->newRightlink;			PageSetLSN(page, lsn);			MarkBufferDirty(buffer);		}		if (BufferIsValid(buffer))			UnlockReleaseBuffer(buffer);	}	UnlockReleaseBuffer(metabuffer);}
Developer: winlibs, Project: postgresql, Lines: 91


示例27: _bt_getbuf

/* *	_bt_getbuf() -- Get a buffer by block number for read or write. * *		blkno == P_NEW means to get an unallocated index page.	The page *		will be initialized before returning it. * *		When this routine returns, the appropriate lock is set on the *		requested buffer and its reference count has been incremented *		(ie, the buffer is "locked and pinned").  Also, we apply *		_bt_checkpage to sanity-check the page (except in P_NEW case). */Buffer_bt_getbuf(Relation rel, BlockNumber blkno, int access){	Buffer		buf;	MIRROREDLOCK_BUFMGR_MUST_ALREADY_BE_HELD;	if (blkno != P_NEW)	{		/* Read an existing block of the relation */		buf = ReadBuffer(rel, blkno);		LockBuffer(buf, access);		_bt_checkpage(rel, buf);	}	else	{		bool		needLock;		Page		page;		Assert(access == BT_WRITE);		/*		 * First see if the FSM knows of any free pages.		 *		 * We can't trust the FSM's report unreservedly; we have to check that		 * the page is still free.	(For example, an already-free page could		 * have been re-used between the time the last VACUUM scanned it and		 * the time the VACUUM made its FSM updates.)		 *		 * In fact, it's worse than that: we can't even assume that it's safe		 * to take a lock on the reported page.  If somebody else has a lock		 * on it, or even worse our own caller does, we could deadlock.  (The		 * own-caller scenario is actually not improbable. Consider an index		 * on a serial or timestamp column.  Nearly all splits will be at the		 * rightmost page, so it's entirely likely that _bt_split will call us		 * while holding a lock on the page most recently acquired from FSM. A		 * VACUUM running concurrently with the previous split could well have		 * placed that page back in FSM.)		 *		 * To get around that, we ask for only a conditional lock on the		 * reported page.  If we fail, then someone else is using the page,		 * and we may reasonably assume it's not free.  (If we happen to be		 * wrong, the worst consequence is the page will be lost to use till		 * the next VACUUM, which is no big problem.)		 */		for (;;)		{			blkno = GetFreeIndexPage(&rel->rd_node);			if (blkno == InvalidBlockNumber)				break;			buf = ReadBuffer(rel, blkno);			if (ConditionalLockBuffer(buf))			{				page = BufferGetPage(buf);				if (_bt_page_recyclable(page))				{					/* Okay to use page.  Re-initialize and return it */					_bt_pageinit(page, BufferGetPageSize(buf));					return buf;				}				elog(DEBUG2, "FSM returned nonrecyclable page");				_bt_relbuf(rel, buf);			}			else			{				elog(DEBUG2, "FSM returned nonlockable page");				/* couldn't get lock, so just drop pin */				ReleaseBuffer(buf);			}		}		/*		 * Extend the relation by one page.		 *		 * We have to use a lock to ensure no one else is extending the rel at		 * the same time, else we will both try to initialize the same new		 * page.  We can skip locking for new or temp relations, however,		 * since no one else could be accessing them.		 */		needLock = !RELATION_IS_LOCAL(rel);		if (needLock)			LockRelationForExtension(rel, ExclusiveLock);		buf = ReadBuffer(rel, P_NEW);		/* Acquire buffer lock on new page */		LockBuffer(buf, BT_WRITE);//.........这里部分代码省略.........
Developer: 50wu, Project: gpdb, Lines: 101


示例28: _hash_splitbucket

/* * _hash_splitbucket -- split 'obucket' into 'obucket' and 'nbucket' * * We are splitting a bucket that consists of a base bucket page and zero * or more overflow (bucket chain) pages.  We must relocate tuples that * belong in the new bucket, and compress out any free space in the old * bucket. * * The caller must hold exclusive locks on both buckets to ensure that * no one else is trying to access them (see README). * * The caller must hold a pin, but no lock, on the metapage buffer. * The buffer is returned in the same state.  (The metapage is only * touched if it becomes necessary to add or remove overflow pages.) */static void_hash_splitbucket(Relation rel,				  Buffer metabuf,				  Bucket obucket,				  Bucket nbucket,				  BlockNumber start_oblkno,				  BlockNumber start_nblkno,				  uint32 maxbucket,				  uint32 highmask,				  uint32 lowmask){	Bucket		bucket;	Buffer		obuf;	Buffer		nbuf;	BlockNumber oblkno;	BlockNumber nblkno;	bool		null;	Datum		datum;	HashItem	hitem;	HashPageOpaque oopaque;	HashPageOpaque nopaque;	IndexTuple	itup;	Size		itemsz;	OffsetNumber ooffnum;	OffsetNumber noffnum;	OffsetNumber omaxoffnum;	Page		opage;	Page		npage;	TupleDesc	itupdesc = RelationGetDescr(rel);	/*	 * It should be okay to simultaneously write-lock pages from each	 * bucket, since no one else can be trying to acquire buffer lock	 * on pages of either bucket.	 */	oblkno = start_oblkno;	nblkno = start_nblkno;	obuf = _hash_getbuf(rel, oblkno, HASH_WRITE);	nbuf = _hash_getbuf(rel, nblkno, HASH_WRITE);	opage = BufferGetPage(obuf);	npage = BufferGetPage(nbuf);	_hash_checkpage(rel, opage, LH_BUCKET_PAGE);	oopaque = (HashPageOpaque) PageGetSpecialPointer(opage);	/* initialize the new bucket's primary page */	_hash_pageinit(npage, BufferGetPageSize(nbuf));	nopaque = (HashPageOpaque) PageGetSpecialPointer(npage);	nopaque->hasho_prevblkno = InvalidBlockNumber;	nopaque->hasho_nextblkno = InvalidBlockNumber;	nopaque->hasho_bucket = nbucket;	nopaque->hasho_flag = LH_BUCKET_PAGE;	nopaque->hasho_filler = HASHO_FILL;	/*	 * Partition the tuples in the old bucket between the old bucket and the	 * new bucket, advancing along the old bucket's overflow bucket chain	 * and adding overflow pages to the new bucket as needed.	 */	ooffnum = FirstOffsetNumber;	omaxoffnum = PageGetMaxOffsetNumber(opage);	for (;;)	{		/*		 * at each iteration through this loop, each of these variables		 * should be up-to-date: obuf opage oopaque ooffnum omaxoffnum		 */		/* check if we're at the end of the page */		if (ooffnum > omaxoffnum)		{			/* at end of page, but check for an(other) overflow page */			oblkno = oopaque->hasho_nextblkno;			if (!BlockNumberIsValid(oblkno))				break;			/*			 * we ran out of tuples on this particular page, but we			 * have more overflow pages; advance to next page.			 */			_hash_wrtbuf(rel, obuf);			obuf = _hash_getbuf(rel, oblkno, HASH_WRITE);			opage = BufferGetPage(obuf);			_hash_checkpage(rel, opage, LH_OVERFLOW_PAGE);			oopaque = (HashPageOpaque) PageGetSpecialPointer(opage);//.........这里部分代码省略.........
Developer: sunyangkobe, Project: cscd43, Lines: 101


示例29: lazy_scan_heap

//.........这里部分代码省略.........		page = BufferGetPage(buf);		if (PageIsNew(page))		{			/*			 * An all-zeroes page could be left over if a backend extends the			 * relation but crashes before initializing the page. Reclaim such			 * pages for use.			 *			 * We have to be careful here because we could be looking at a			 * page that someone has just added to the relation and not yet			 * been able to initialize (see RelationGetBufferForTuple). To			 * protect against that, release the buffer lock, grab the			 * relation extension lock momentarily, and re-lock the buffer. If			 * the page is still uninitialized by then, it must be left over			 * from a crashed backend, and we can initialize it.			 *			 * We don't really need the relation lock when this is a new or			 * temp relation, but it's probably not worth the code space to			 * check that, since this surely isn't a critical path.			 *			 * Note: the comparable code in vacuum.c need not worry because			 * it's got exclusive lock on the whole relation.			 */			LockBuffer(buf, BUFFER_LOCK_UNLOCK);			LockRelationForExtension(onerel, ExclusiveLock);			UnlockRelationForExtension(onerel, ExclusiveLock);			LockBufferForCleanup(buf);			if (PageIsNew(page))			{				ereport(WARNING,				(errmsg("relation /"%s/" page %u is uninitialized --- fixing",						relname, blkno)));				PageInit(page, BufferGetPageSize(buf), 0);				empty_pages++;			}			freespace = PageGetHeapFreeSpace(page);			MarkBufferDirty(buf);			UnlockReleaseBuffer(buf);			RecordPageWithFreeSpace(onerel, blkno, freespace);			continue;		}		if (PageIsEmpty(page))		{			empty_pages++;			freespace = PageGetHeapFreeSpace(page);			if (!PageIsAllVisible(page))			{				PageSetAllVisible(page);				SetBufferCommitInfoNeedsSave(buf);			}			LockBuffer(buf, BUFFER_LOCK_UNLOCK);			/* Update the visibility map */			if (!all_visible_according_to_vm)			{				visibilitymap_pin(onerel, blkno, &vmbuffer);				LockBuffer(buf, BUFFER_LOCK_SHARE);				if (PageIsAllVisible(page))					visibilitymap_set(onerel, blkno, PageGetLSN(page), &vmbuffer);				LockBuffer(buf, BUFFER_LOCK_UNLOCK);			}
Developer: hl0103, Project: pgxc, Lines: 67


示例30: lazy_scan_heap

//.........这里部分代码省略.........			 *			 * We have to be careful here because we could be looking at a			 * page that someone has just added to the relation and not yet			 * been able to initialize (see RelationGetBufferForTuple). To			 * protect against that, release the buffer lock, grab the			 * relation extension lock momentarily, and re-lock the buffer. If			 * the page is still uninitialized by then, it must be left over			 * from a crashed backend, and we can initialize it.			 *			 * We don't really need the relation lock when this is a new or			 * temp relation, but it's probably not worth the code space to			 * check that, since this surely isn't a critical path.			 *			 * Note: the comparable code in vacuum.c need not worry because			 * it's got exclusive lock on the whole relation.			 */			LockBuffer(buf, BUFFER_LOCK_UNLOCK);			MIRROREDLOCK_BUFMGR_UNLOCK;			/* -------- MirroredLock ---------- */			LockRelationForExtension(onerel, ExclusiveLock);			UnlockRelationForExtension(onerel, ExclusiveLock);			/* -------- MirroredLock ---------- */			MIRROREDLOCK_BUFMGR_LOCK;			LockBufferForCleanup(buf);			if (PageIsNew(page))			{				ereport(WARNING,				(errmsg("relation /"%s/" page %u is uninitialized --- fixing",						relname, blkno)));				PageInit(page, BufferGetPageSize(buf), 0);				/* must record in xlog so that changetracking will know about this change */				log_heap_newpage(onerel, page, blkno);				empty_pages++;				lazy_record_free_space(vacrelstats, blkno,									   PageGetHeapFreeSpace(page));			}			MarkBufferDirty(buf);			UnlockReleaseBuffer(buf);			MIRROREDLOCK_BUFMGR_UNLOCK;			/* -------- MirroredLock ---------- */			continue;		}		if (PageIsEmpty(page))		{			empty_pages++;			lazy_record_free_space(vacrelstats, blkno,								   PageGetHeapFreeSpace(page));			UnlockReleaseBuffer(buf);			MIRROREDLOCK_BUFMGR_UNLOCK;			/* -------- MirroredLock ---------- */			continue;		}		/*		 * Prune all HOT-update chains in this page.
Developer: phan-pivotal, Project: gpdb, Lines: 67



Note: The BufferGetPageSize examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors, and any distribution or use should follow the corresponding project's license. Please do not republish without permission.

