This article collects typical usage examples of the C++ swp_entry function. If you have been wondering what exactly swp_entry does, how to call it, or what real-world uses look like, the 18 hand-picked code examples below should help. By default the examples are sorted by popularity; upvoting the ones you like or find useful helps the system recommend better C++ code examples.

Example 1: swp_offset

/**
 * swapin_readahead - swap in pages in hope we need them soon
 * @entry: swap entry of this memory
 * @gfp_mask: memory allocation flags
 * @vma: user vma this address belongs to
 * @addr: target address for mempolicy
 *
 * Returns the struct page for entry and addr, after queueing swapin.
 *
 * Primitive swap readahead code. We simply read an aligned block of
 * (1 << page_cluster) entries in the swap area. This method is chosen
 * because it doesn't cost us any seek time. We also make sure to queue
 * the 'original' request together with the readahead ones...
 *
 * This has been extended to use the NUMA policies from the mm triggering
 * the readahead.
 *
 * Caller must hold down_read on the vma->vm_mm if vma is not NULL.
 */
struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                        struct vm_area_struct *vma, unsigned long addr)
{
#ifdef CONFIG_SWAP_ENABLE_READAHEAD
        struct page *page;
        unsigned long offset = swp_offset(entry);
        unsigned long start_offset, end_offset;
        unsigned long mask = (1UL << page_cluster) - 1;
        struct blk_plug plug;

        /* Read a page_cluster sized and aligned cluster around offset. */
        start_offset = offset & ~mask;
        end_offset = offset | mask;
        if (!start_offset)      /* First page is swap header. */
                start_offset++;

        blk_start_plug(&plug);
        for (offset = start_offset; offset <= end_offset ; offset++) {
                /* Ok, do the async read-ahead now */
                page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
                                                gfp_mask, vma, addr);
                if (!page)
                        continue;
                page_cache_release(page);
        }
        blk_finish_plug(&plug);

        lru_add_drain();        /* Push any new pages onto the LRU now */
#endif /* CONFIG_SWAP_ENABLE_READAHEAD */
        return read_swap_cache_async(entry, gfp_mask, vma, addr);
}
Developer ID: BigBot96, Project: android_kernel_samsung_gts2wifi, Lines: 50
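The heart of this example is the aligned-window arithmetic: with mask = (1UL << page_cluster) - 1, start_offset = offset & ~mask rounds the faulting slot down to a cluster boundary and end_offset = offset | mask extends it to the last slot of the same cluster, so one readahead pass covers a naturally aligned block. A minimal user-space sketch of just that arithmetic (the value 3 for page_cluster is assumed for illustration; in the kernel it is a runtime tunable):

#include <stdio.h>

int main(void)
{
        unsigned long page_cluster = 3; /* assumed value; a tunable in the kernel */
        unsigned long mask = (1UL << page_cluster) - 1;
        unsigned long offset = 42;      /* arbitrary faulting swap slot */

        unsigned long start = offset & ~mask;   /* round down to cluster start */
        unsigned long end = offset | mask;      /* last slot of the same cluster */
        if (!start)     /* slot 0 holds the swap header */
                start++;

        /* prints: window for offset 42 is [40, 47] (8 slots) */
        printf("window for offset %lu is [%lu, %lu] (%lu slots)\n",
               offset, start, end, mask + 1);
        return 0;
}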
Example 2: swp_offset

struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                        struct vm_area_struct *vma, unsigned long addr)
{
        struct page *page;
        unsigned long offset = swp_offset(entry);
        unsigned long start_offset, end_offset;
        unsigned long mask = (1UL << page_cluster) - 1;

        start_offset = offset & ~mask;
        end_offset = offset | mask;
        if (!start_offset)
                start_offset++;

        for (offset = start_offset; offset <= end_offset ; offset++) {
                page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
                                                gfp_mask, vma, addr);
                if (!page)
                        continue;
                page_cache_release(page);
        }

        lru_add_drain();
        return read_swap_cache_async(entry, gfp_mask, vma, addr);
}
Developer ID: Alex-V2, Project: One_M8_4.4.3_kernel, Lines: 25
Example 3: swp_offset

/**
 * swapin_readahead - swap in pages in hope we need them soon
 * @entry: swap entry of this memory
 * @gfp_mask: memory allocation flags
 * @vma: user vma this address belongs to
 * @addr: target address for mempolicy
 *
 * Returns the struct page for entry and addr, after queueing swapin.
 *
 * Primitive swap readahead code. We simply read an aligned block of
 * (1 << page_cluster) entries in the swap area. This method is chosen
 * because it doesn't cost us any seek time. We also make sure to queue
 * the 'original' request together with the readahead ones...
 *
 * This has been extended to use the NUMA policies from the mm triggering
 * the readahead.
 *
 * Caller must hold down_read on the vma->vm_mm if vma is not NULL.
 */
struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                        struct vm_area_struct *vma, unsigned long addr)
{
        struct page *page;
        unsigned long offset = swp_offset(entry);
        unsigned long start_offset, end_offset;
        unsigned long mask = is_swap_fast(entry) ? 0 :
                                (1UL << page_cluster) - 1;

        /* Read a page_cluster sized and aligned cluster around offset. */
        start_offset = offset & ~mask;
        end_offset = offset | mask;
        if (!start_offset)      /* First page is swap header. */
                start_offset++;

        for (offset = start_offset; offset <= end_offset ; offset++) {
                /* Ok, do the async read-ahead now */
                page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
                                                gfp_mask, vma, addr);
                if (!page)
                        continue;
                page_cache_release(page);
        }
        lru_add_drain();        /* Push any new pages onto the LRU now */
        return read_swap_cache_async(entry, gfp_mask, vma, addr);
}
Developer ID: qqzwc, Project: Solid_Kernel-G3-STOCK-MM, Lines: 45
Example 4: valid_swaphandles

/**
 * swapin_readahead - swap in pages in hope we need them soon
 * @entry: swap entry of this memory
 * @gfp_mask: memory allocation flags
 * @vma: user vma this address belongs to
 * @addr: target address for mempolicy
 *
 * Returns the struct page for entry and addr, after queueing swapin.
 *
 * Primitive swap readahead code. We simply read an aligned block of
 * (1 << page_cluster) entries in the swap area. This method is chosen
 * because it doesn't cost us any seek time. We also make sure to queue
 * the 'original' request together with the readahead ones...
 *
 * This has been extended to use the NUMA policies from the mm triggering
 * the readahead.
 *
 * Caller must hold down_read on the vma->vm_mm if vma is not NULL.
 */
struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                        struct vm_area_struct *vma, unsigned long addr)
{
        int nr_pages;
        struct page *page;
        unsigned long offset;
        unsigned long end_offset;

        /*
         * Get starting offset for readaround, and number of pages to read.
         * Adjust starting address by readbehind (for NUMA interleave case)?
         * No, it's very unlikely that swap layout would follow vma layout,
         * more likely that neighbouring swap pages came from the same node:
         * so use the same "addr" to choose the same node for each swap read.
         */
        nr_pages = valid_swaphandles(entry, &offset);
        for (end_offset = offset + nr_pages; offset < end_offset; offset++) {
                /* Ok, do the async read-ahead now */
                page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
                                                gfp_mask, vma, addr);
                if (!page)
                        break;
                page_cache_release(page);
        }
        lru_add_drain();        /* Push any new pages onto the LRU now */
        return read_swap_cache_async(entry, gfp_mask, vma, addr);
}
Developer ID: yl849646685, Project: linux-2.6.32, Lines: 46
Example 5: write_page

static int write_page(void *buf, unsigned long offset)
{
        swp_entry_t entry;
        int error = -ENOSPC;

        if (offset) {
                entry = swp_entry(root_swap, offset);
                error = rw_swap_page_sync(WRITE, entry, virt_to_page(buf));
        }
        return error;
}
Developer ID: FatSunHYS, Project: OSCourseDesign, Lines: 11
Example 6: alloc_swapdev_block

sector_t alloc_swapdev_block(int swap, struct bitmap_page *bitmap)
{
        unsigned long offset;

        offset = swp_offset(get_swap_page_of_type(swap));
        if (offset) {
                if (bitmap_set(bitmap, offset))
                        swap_free(swp_entry(swap, offset));
                else
                        return swapdev_block(swap, offset);
        }
        return 0;
}
Developer ID: xiandaicxsj, Project: copyKvm, Lines: 13
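Examples 5 and 6 show the two directions of the conversion: swp_entry(type, offset) packs a swap device index and a slot offset into a single word, and swp_type()/swp_offset() unpack it again. A simplified stand-alone model of that encoding, assuming a 64-bit unsigned long (the SWP_TYPE_SHIFT value of 58 is a made-up constant for illustration; the real kernel derives the split from the architecture's pte layout):

#include <assert.h>
#include <stdio.h>

typedef struct { unsigned long val; } swp_entry_t;

#define SWP_TYPE_SHIFT 58UL     /* hypothetical split; arch-dependent in reality */

static swp_entry_t swp_entry(unsigned long type, unsigned long offset)
{
        swp_entry_t e = { (type << SWP_TYPE_SHIFT) | offset };
        return e;
}

static unsigned long swp_type(swp_entry_t e)
{
        return e.val >> SWP_TYPE_SHIFT;
}

static unsigned long swp_offset(swp_entry_t e)
{
        return e.val & ((1UL << SWP_TYPE_SHIFT) - 1);
}

int main(void)
{
        swp_entry_t e = swp_entry(2, 12345);    /* device 2, slot 12345 */

        assert(swp_type(e) == 2 && swp_offset(e) == 12345);
        printf("type=%lu offset=%lu\n", swp_type(e), swp_offset(e));
        return 0;
}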
Example 7: mark_swapfiles

static int mark_swapfiles(swp_entry_t start)
{
        int error;

        rw_swap_page_sync(READ, swp_entry(root_swap, 0),
                          virt_to_page((unsigned long)&swsusp_header));
        if (!memcmp("SWAP-SPACE",swsusp_header.sig, 10) ||
            !memcmp("SWAPSPACE2",swsusp_header.sig, 10)) {
                memcpy(swsusp_header.orig_sig,swsusp_header.sig, 10);
                memcpy(swsusp_header.sig,SWSUSP_SIG, 10);
                swsusp_header.image = start;
                error = rw_swap_page_sync(WRITE, swp_entry(root_swap, 0),
                                virt_to_page((unsigned long)&swsusp_header));
        } else {
                pr_debug("swsusp: Partition is not swap space.\n");
                error = -ENODEV;
        }
        return error;
}
Developer ID: FatSunHYS, Project: OSCourseDesign, Lines: 22
Example 8: mark_swapfiles

static int mark_swapfiles(swp_entry_t prev)
{
        int error;

        rw_swap_page_sync(READ, swp_entry(root_swap, 0),
                          virt_to_page((unsigned long)&pmdisk_header));
        if (!memcmp("SWAP-SPACE",pmdisk_header.sig,10) ||
            !memcmp("SWAPSPACE2",pmdisk_header.sig,10)) {
                memcpy(pmdisk_header.orig_sig,pmdisk_header.sig,10);
                memcpy(pmdisk_header.sig,PMDISK_SIG,10);
                pmdisk_header.pmdisk_info = prev;
                error = rw_swap_page_sync(WRITE, swp_entry(root_swap, 0),
                                virt_to_page((unsigned long)&pmdisk_header));
        } else {
                pr_debug("pmdisk: Partition is not swap space.\n");
                error = -ENODEV;
        }
        return error;
}
Developer ID: FelipeFernandes1988, Project: Alice-1121-Modem, Lines: 22
Example 9: alloc_swap_page

unsigned long alloc_swap_page(int swap, struct bitmap_page *bitmap)
{
        unsigned long offset;

        offset = swp_offset(get_swap_page_of_type(swap));
        if (offset) {
                if (bitmap_set(bitmap, offset)) {
                        swap_free(swp_entry(swap, offset));
                        offset = 0;
                }
        }
        return offset;
}
Developer ID: BackupTheBerlios, Project: arp2-svn, Lines: 13
Example 10: free_all_swap_pages

void free_all_swap_pages(int swap, struct bitmap_page *bitmap)
{
        unsigned int bit, n;
        unsigned long test;

        bit = 0;
        while (bitmap) {
                for (n = 0; n < BITMAP_PAGE_CHUNKS; n++)
                        for (test = 1UL; test; test <<= 1) {
                                if (bitmap->chunks[n] & test)
                                        swap_free(swp_entry(swap, bit));
                                bit++;
                        }
                bitmap = bitmap->next;
        }
}
Developer ID: BackupTheBerlios, Project: arp2-svn, Lines: 16
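The inner loop above visits every bit of a chunk with a shifting one-bit mask — the loop ends when the set bit is shifted out of the word — while bit tracks the global slot index across chunks. The same walk in a runnable user-space form (the chunk contents are made up; only slots 1 and 5 are set):

#include <stdio.h>

int main(void)
{
        /* pretend chunk with slots 1 and 5 allocated */
        unsigned long chunk = (1UL << 1) | (1UL << 5);
        unsigned long test;
        unsigned int bit = 0;

        for (test = 1UL; test; test <<= 1) {    /* ends when the bit shifts out */
                if (chunk & test)
                        printf("would swap_free() slot %u\n", bit);
                bit++;
        }
        return 0;
}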
Example 11: get_swap_page_of_type

swp_entry_t get_swap_page_of_type(int type)
{
        struct swap_info_struct *si;
        pgoff_t offset;

        spin_lock(&swap_lock);
        si = swap_info + type;
        if (si->flags & SWP_WRITEOK) {
                nr_swap_pages--;
                offset = scan_swap_map(si);
                if (offset) {
                        spin_unlock(&swap_lock);
                        return swp_entry(type, offset);
                }
                nr_swap_pages++;
        }
        spin_unlock(&swap_lock);
        return (swp_entry_t) {0};
}
Developer ID: acassis, Project: emlinux-ssd1935, Lines: 19
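Note the failure convention here: when no slot is free the function returns (swp_entry_t){0}, which is unambiguous because slot 0 always holds the swap header and is never handed out. A runnable sketch of the caller-side check (the allocator below is a stub that always reports exhaustion; the real function is the kernel one above):

#include <stdio.h>

typedef struct { unsigned long val; } swp_entry_t;

/* stub standing in for the kernel allocator: always reports exhaustion */
static swp_entry_t get_swap_page_of_type(int type)
{
        (void)type;
        return (swp_entry_t) {0};
}

int main(void)
{
        swp_entry_t entry = get_swap_page_of_type(0);

        if (!entry.val) {       /* slot 0 is the header, so 0 means failure */
                fprintf(stderr, "no free swap slot on that device\n");
                return 1;
        }
        printf("got entry %#lx\n", entry.val);
        return 0;
}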
Example 12: get_swap_page

swp_entry_t get_swap_page(void)
{
        struct swap_info_struct *si;
        pgoff_t offset;
        int type, next;
        int wrapped = 0;

        spin_lock(&swap_lock);
        if (nr_swap_pages <= 0)
                goto noswap;
        nr_swap_pages--;

        for (type = swap_list.next; type >= 0 && wrapped < 2; type = next) {
                si = swap_info + type;
                next = si->next;
                if (next < 0 ||
                    (!wrapped && si->prio != swap_info[next].prio)) {
                        next = swap_list.head;
                        wrapped++;
                }

                if (!si->highest_bit)
                        continue;
                if (!(si->flags & SWP_WRITEOK))
                        continue;

                swap_list.next = next;
                offset = scan_swap_map(si);
                if (offset) {
                        spin_unlock(&swap_lock);
                        return swp_entry(type, offset);
                }
                next = swap_list.next;
        }

        nr_swap_pages++;
noswap:
        spin_unlock(&swap_lock);
        return (swp_entry_t) {0};
}
Developer ID: acassis, Project: emlinux-ssd1935, Lines: 40
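Compared with Example 11, this variant picks the device itself: it walks the priority-sorted device list, rotates swap_list.next so equal-priority devices are used round-robin, and lets the scan wrap back to the head at most twice before giving up. A toy user-space model of that scan order (the device table and field names are invented for the demo and are not the kernel's structures):

#include <stdio.h>

/*
 * Toy model of the scan in get_swap_page(): devices sorted by priority,
 * equal priorities used round-robin, at most two wraps before giving up.
 */
struct swap_dev { int prio; int free_slots; };

int main(void)
{
        struct swap_dev devs[] = { {10, 0}, {10, 0}, {5, 7} };
        int n = 3;
        int type, next;
        int wrapped = 0;

        for (type = 0; type >= 0 && wrapped < 2; type = next) {
                next = type + 1;
                if (next >= n ||
                    (!wrapped && devs[type].prio != devs[next].prio)) {
                        next = 0;       /* wrap back to the highest priority */
                        wrapped++;
                }
                if (!devs[type].free_slots)
                        continue;
                printf("allocated a slot from device %d\n", type);
                return 0;
        }
        printf("no swap space available\n");
        return 0;
}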
Example 13: swsusp_write

int swsusp_write(void)
{
        struct swap_map_handle handle;
        struct snapshot_handle snapshot;
        struct swsusp_info *header;
        int error;

        if ((error = swsusp_swap_check())) {
                printk(KERN_ERR "swsusp: Cannot find swap device, try swapon -a.\n");
                return error;
        }
        memset(&snapshot, 0, sizeof(struct snapshot_handle));
        error = snapshot_read_next(&snapshot, PAGE_SIZE);
        if (error < PAGE_SIZE)
                return error < 0 ? error : -EFAULT;
        header = (struct swsusp_info *)data_of(snapshot);
        if (!enough_swap(header->pages)) {
                printk(KERN_ERR "swsusp: Not enough free swap\n");
                return -ENOSPC;
        }
        error = get_swap_writer(&handle);
        if (!error) {
                unsigned long start = handle.cur_swap;
                error = swap_write_page(&handle, header);
                if (!error)
                        error = save_image(&handle, &snapshot,
                                        header->pages - 1);
                if (!error) {
                        flush_swap_writer(&handle);
                        printk("S");
                        error = mark_swapfiles(swp_entry(root_swap, start));
                        printk("|\n");
                }
        }
        if (error)
                free_all_swap_pages(root_swap, handle.bitmap);
        release_swap_writer(&handle);
        return error;
}
Developer ID: FatSunHYS, Project: OSCourseDesign, Lines: 39
Example 14: swp_offset

/**
 * swapin_readahead - swap in pages in hope we need them soon
 * @entry: swap entry of this memory
 * @gfp_mask: memory allocation flags
 * @vma: user vma this address belongs to
 * @addr: target address for mempolicy
 *
 * Returns the struct page for entry and addr, after queueing swapin.
 *
 * Primitive swap readahead code. We simply read an aligned block of
 * (1 << page_cluster) entries in the swap area. This method is chosen
 * because it doesn't cost us any seek time. We also make sure to queue
 * the 'original' request together with the readahead ones...
 *
 * This has been extended to use the NUMA policies from the mm triggering
 * the readahead.
 *
 * Caller must hold down_read on the vma->vm_mm if vma is not NULL.
 */
struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                        struct vm_area_struct *vma, unsigned long addr)
{
        struct page *page;
        unsigned long entry_offset = swp_offset(entry);
        unsigned long offset = entry_offset;
        unsigned long start_offset, end_offset;
        unsigned long mask;
        struct blk_plug plug;
        bool do_poll = true;

        mask = swapin_nr_pages(offset) - 1;
        if (!mask)
                goto skip;

        do_poll = false;
        /* Read a page_cluster sized and aligned cluster around offset. */
        start_offset = offset & ~mask;
        end_offset = offset | mask;
        if (!start_offset)      /* First page is swap header. */
                start_offset++;

        blk_start_plug(&plug);
        for (offset = start_offset; offset <= end_offset ; offset++) {
                /* Ok, do the async read-ahead now */
                page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
                                                gfp_mask, vma, addr, false);
                if (!page)
                        continue;
                if (offset != entry_offset &&
                    likely(!PageTransCompound(page)))
                        SetPageReadahead(page);
                put_page(page);
        }
        blk_finish_plug(&plug);

        lru_add_drain();        /* Push any new pages onto the LRU now */
skip:
        return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll);
}
Developer ID: mdamt, Project: linux, Lines: 58
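This newer variant sizes the window dynamically: swapin_nr_pages() returns a power-of-two page count, so nr - 1 is usable directly as the alignment mask, and a result of 1 (mask 0) skips readahead entirely. The sketch below is a hypothetical, much-simplified stand-in for that heuristic — it merely doubles the window after a readahead hit and halves it otherwise, which is not the kernel's exact algorithm:

#include <stdio.h>

/*
 * Hypothetical adaptive readahead window in the spirit of swapin_nr_pages():
 * always a power of two so that win - 1 works as an alignment mask.
 * NOT the kernel's actual heuristic.
 */
static unsigned long adapt_window(unsigned long win, int hit,
                                  unsigned long max)
{
        if (hit)
                win <<= 1;              /* readahead paid off: widen */
        else if (win > 1)
                win >>= 1;              /* wasted reads: narrow */
        if (win > max)
                win = max;
        return win;                     /* stays a power of two */
}

int main(void)
{
        unsigned long win = 2;
        int hit_pattern[] = { 1, 1, 0, 0, 0 };

        for (int i = 0; i < 5; i++) {
                win = adapt_window(win, hit_pattern[i], 8);
                printf("step %d: window=%lu mask=%#lx\n", i, win, win - 1);
        }
        return 0;
}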
Example 15: mark_swapfiles

static void mark_swapfiles(swp_entry_t prev, int mode)
{
        swp_entry_t entry;
        union diskpage *cur;
        struct page *page;

        if (root_swap == 0xFFFF)  /* ignored */
                return;

        page = alloc_page(GFP_ATOMIC);
        if (!page)
                panic("Out of memory in mark_swapfiles");
        cur = page_address(page);
        /* XXX: this is dirty hack to get first page of swap file */
        entry = swp_entry(root_swap, 0);
        rw_swap_page_sync(READ, entry, page);

        if (mode == MARK_SWAP_RESUME) {
                if (!memcmp("S1",cur->swh.magic.magic,2))
                        memcpy(cur->swh.magic.magic,"SWAP-SPACE",10);
                else if (!memcmp("S2",cur->swh.magic.magic,2))
                        memcpy(cur->swh.magic.magic,"SWAPSPACE2",10);
                else
                        printk("%sUnable to find suspended-data signature (%.10s - misspelled?\n",
                                name_resume, cur->swh.magic.magic);
        } else {
                if ((!memcmp("SWAP-SPACE",cur->swh.magic.magic,10)))
                        memcpy(cur->swh.magic.magic,"S1SUSP....",10);
                else if ((!memcmp("SWAPSPACE2",cur->swh.magic.magic,10)))
                        memcpy(cur->swh.magic.magic,"S2SUSP....",10);
                else
                        panic("\nSwapspace is not swapspace (%.10s)\n",
                                cur->swh.magic.magic);
                cur->link.next = prev; /* prev is the first/last swap page of the resume area */
                /* link.next lies *no more* in last 4/8 bytes of magic */
        }
        rw_swap_page_sync(WRITE, entry, page);
        __free_page(page);
}
Developer ID: xricson, Project: knoppix, Lines: 36
Example 16: zswap_frontswap_store

/* attempts to compress and store a single page */
static int zswap_frontswap_store(unsigned type, pgoff_t offset,
                                struct page *page)
{
        struct zswap_tree *tree = zswap_trees[type];
        struct zswap_entry *entry, *dupentry;
        int ret;
        unsigned int dlen = PAGE_SIZE, len;
        unsigned long handle;
        char *buf;
        u8 *src, *dst;
        struct zswap_header *zhdr;

        if (!tree) {
                ret = -ENODEV;
                goto reject;
        }

        /* reclaim space if needed */
        if (zswap_is_full()) {
                zswap_pool_limit_hit++;
                if (zbud_reclaim_page(tree->pool, 8)) {
                        zswap_reject_reclaim_fail++;
                        ret = -ENOMEM;
                        goto reject;
                }
        }

        /* allocate entry */
        entry = zswap_entry_cache_alloc(GFP_KERNEL);
        if (!entry) {
                zswap_reject_kmemcache_fail++;
                ret = -ENOMEM;
                goto reject;
        }

        /* compress */
        dst = get_cpu_var(zswap_dstmem);
        src = kmap_atomic(page);
        ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
        kunmap_atomic(src);
        if (ret) {
                ret = -EINVAL;
                goto freepage;
        }

        /* store */
        len = dlen + sizeof(struct zswap_header);
        ret = zbud_alloc(tree->pool, len, __GFP_NORETRY | __GFP_NOWARN,
                &handle);
        if (ret == -ENOSPC) {
                zswap_reject_compress_poor++;
                goto freepage;
        }
        if (ret) {
                zswap_reject_alloc_fail++;
                goto freepage;
        }
        zhdr = zbud_map(tree->pool, handle);
        zhdr->swpentry = swp_entry(type, offset);
        buf = (u8 *)(zhdr + 1);
        memcpy(buf, dst, dlen);
        zbud_unmap(tree->pool, handle);
        put_cpu_var(zswap_dstmem);

        /* populate entry */
        entry->offset = offset;
        entry->handle = handle;
        entry->length = dlen;

        /* map */
        spin_lock(&tree->lock);
        do {
                ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
                if (ret == -EEXIST) {
                        zswap_duplicate_entry++;
                        /* remove from rbtree */
                        zswap_rb_erase(&tree->rbroot, dupentry);
                        zswap_entry_put(tree, dupentry);
                }
        } while (ret == -EEXIST);
        spin_unlock(&tree->lock);

        /* update stats */
        atomic_inc(&zswap_stored_pages);
        zswap_pool_pages = zbud_get_pool_size(tree->pool);

        return 0;

freepage:
        put_cpu_var(zswap_dstmem);
        zswap_entry_cache_free(entry);
reject:
        return ret;
}
Developer ID: AnadoluPanteri, Project: kernel-plus-harmattan, Lines: 95
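Worth noting is the on-pool layout this example builds: zbud_alloc() reserves dlen + sizeof(struct zswap_header) bytes, the header records the swap entry so a reclaimed object can later be written back to the right swap slot, and the compressed payload starts immediately after the header at (u8 *)(zhdr + 1). A stand-alone sketch of that header-plus-payload packing (the struct is reduced to the one field the example uses):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { unsigned long val; } swp_entry_t;

/* reduced model of the layout used above: header first, payload after */
struct zswap_header {
        swp_entry_t swpentry;   /* where to write the page back on reclaim */
};

int main(void)
{
        const char payload[] = "compressed bytes";
        size_t dlen = sizeof(payload);
        size_t len = dlen + sizeof(struct zswap_header);

        struct zswap_header *zhdr = malloc(len);
        if (!zhdr)
                return 1;
        zhdr->swpentry.val = 0xdeadbeef;        /* stand-in swap entry */
        memcpy((unsigned char *)(zhdr + 1), payload, dlen); /* payload follows header */

        printf("total=%zu header=%zu payload=%zu\n",
               len, sizeof(*zhdr), dlen);
        free(zhdr);
        return 0;
}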
Example 17: zswap_frontswap_store

/* attempts to compress and store a single page */
static int zswap_frontswap_store(unsigned type, pgoff_t offset,
                                struct page *page)
{
        struct zswap_tree *tree = zswap_trees[type];
        struct zswap_entry *entry, *dupentry;
        int ret;
        unsigned int dlen = PAGE_SIZE, len;
        unsigned long handle;
        char *buf;
        u8 *src, *dst;
        struct zswap_header *zhdr;

        if (!tree) {
                ret = -ENODEV;
                goto reject;
        }

        /* if this page got EIO on pageout before, give up immediately */
        if (PageError(page)) {
                ret = -ENOMEM;
                goto reject;
        }

        /* reclaim space if needed */
        if (zswap_is_full()) {
                zswap_pool_limit_hit++;
                if (zpool_shrink(zswap_pool, 1, NULL)) {
                        zswap_reject_reclaim_fail++;
                        ret = -ENOMEM;
                        goto reject;
                }
        }

        /* allocate entry */
        entry = zswap_entry_cache_alloc(GFP_KERNEL);
        if (!entry) {
                zswap_reject_kmemcache_fail++;
                ret = -ENOMEM;
                goto reject;
        }

        /* compress */
        src = kmap_atomic(page);
        if (page_zero_filled(src)) {
                atomic_inc(&zswap_zero_pages);
                entry->zero_flag = 1;
                kunmap_atomic(src);

                handle = 0;
                dlen = PAGE_SIZE;
                goto zeropage_out;
        }
        dst = get_cpu_var(zswap_dstmem);

        ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
        kunmap_atomic(src);
        if (ret) {
                ret = -EINVAL;
                goto freepage;
        }

        /* store */
        len = dlen + sizeof(struct zswap_header);
        ret = zpool_malloc(zswap_pool, len, __GFP_NORETRY | __GFP_NOWARN,
                &handle);
        if (ret == -ENOSPC) {
                zswap_reject_compress_poor++;
                goto freepage;
        }
        if (ret) {
                zswap_reject_alloc_fail++;
                goto freepage;
        }
        zhdr = zpool_map_handle(zswap_pool, handle, ZPOOL_MM_RW);
        zhdr->swpentry = swp_entry(type, offset);
        buf = (u8 *)(zhdr + 1);
        memcpy(buf, dst, dlen);
        zpool_unmap_handle(zswap_pool, handle);
        put_cpu_var(zswap_dstmem);

zeropage_out:
        /* populate entry */
        entry->offset = offset;
        entry->handle = handle;
        entry->length = dlen;

        /* map */
        spin_lock(&tree->lock);
        do {
                ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
                if (ret == -EEXIST) {
                        zswap_duplicate_entry++;
                        /* remove from rbtree */
                        zswap_rb_erase(&tree->rbroot, dupentry);
                        zswap_entry_put(tree, dupentry);
                }
        } while (ret == -EEXIST);
        spin_unlock(&tree->lock);
//......... rest of the code omitted .........
Developer ID: barryjabshire, Project: SimplKernel-LL-G925F, Lines: 101
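Example 17 adds a fast path over Example 16: a page that is entirely zero-filled is recorded with entry->zero_flag = 1 and no compressed data at all. The page_zero_filled() helper itself is not part of the excerpt; the version below is a plausible guess at it (an assumption, not the project's code), scanning the page one unsigned long at a time:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096

/* plausible sketch of page_zero_filled(); the real helper is not shown above */
static bool page_zero_filled(const void *ptr)
{
        const unsigned long *page = ptr;
        size_t n = PAGE_SIZE / sizeof(unsigned long);

        for (size_t pos = 0; pos < n; pos++)
                if (page[pos])
                        return false;
        return true;
}

int main(void)
{
        static unsigned char page[PAGE_SIZE];   /* zero-initialized */

        printf("zero page: %d\n", page_zero_filled(page));      /* 1 */
        page[123] = 1;
        printf("dirty page: %d\n", page_zero_filled(page));     /* 0 */
        return 0;
}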
Example 18: try_to_unuse

/*
 * We completely avoid races by reading each swap page in advance,
 * and then search for the process using it.  All the necessary
 * page table adjustments can then be made atomically.
 */
static int try_to_unuse(unsigned int type)
{
        struct swap_info_struct * si = &swap_info[type];
        struct mm_struct *start_mm;
        unsigned short *swap_map;
        unsigned short swcount;
        struct page *page;
        swp_entry_t entry;
        unsigned int i = 0;
        int retval = 0;
        int reset_overflow = 0;
        int shmem;

        /*
         * When searching mms for an entry, a good strategy is to
         * start at the first mm we freed the previous entry from
         * (though actually we don't notice whether we or coincidence
         * freed the entry).  Initialize this start_mm with a hold.
         *
         * A simpler strategy would be to start at the last mm we
         * freed the previous entry from; but that would take less
         * advantage of mmlist ordering, which clusters forked mms
         * together, child after parent.  If we race with dup_mmap(), we
         * prefer to resolve parent before child, lest we miss entries
         * duplicated after we scanned child: using last mm would invert
         * that.  Though it's only a serious concern when an overflowed
         * swap count is reset from SWAP_MAP_MAX, preventing a rescan.
         */
        start_mm = &init_mm;
        atomic_inc(&init_mm.mm_users);

        /*
         * Keep on scanning until all entries have gone.  Usually,
         * one pass through swap_map is enough, but not necessarily:
         * there are races when an instance of an entry might be missed.
         */
        while ((i = find_next_to_unuse(si, i)) != 0) {
                if (signal_pending(current)) {
                        retval = -EINTR;
                        break;
                }

                /*
                 * Get a page for the entry, using the existing swap
                 * cache page if there is one.  Otherwise, get a clean
                 * page and read the swap into it.
                 */
                swap_map = &si->swap_map[i];
                entry = swp_entry(type, i);
                page = read_swap_cache_async(entry, NULL, 0);
                if (!page) {
                        /*
                         * Either swap_duplicate() failed because entry
                         * has been freed independently, and will not be
                         * reused since sys_swapoff() already disabled
                         * allocation from here, or alloc_page() failed.
                         */
                        if (!*swap_map)
                                continue;
                        retval = -ENOMEM;
                        break;
                }

                /*
                 * Don't hold on to start_mm if it looks like exiting.
                 */
                if (atomic_read(&start_mm->mm_users) == 1) {
                        mmput(start_mm);
                        start_mm = &init_mm;
                        atomic_inc(&init_mm.mm_users);
                }

                /*
                 * Wait for and lock page.  When do_swap_page races with
                 * try_to_unuse, do_swap_page can handle the fault much
                 * faster than try_to_unuse can locate the entry.  This
                 * apparently redundant "wait_on_page_locked" lets try_to_unuse
                 * defer to do_swap_page in such a case - in some tests,
                 * do_swap_page and try_to_unuse repeatedly compete.
                 */
                wait_on_page_locked(page);
                wait_on_page_writeback(page);
                lock_page(page);
                wait_on_page_writeback(page);

                /*
                 * Remove all references to entry.
                 * Whenever we reach init_mm, there's no address space
                 * to search, but use it as a reminder to search shmem.
                 */
                shmem = 0;
                swcount = *swap_map;
                if (swcount > 1) {
                        if (start_mm == &init_mm)
                                shmem = shmem_unuse(entry, page);
//......... rest of the code omitted .........
Developer ID: acassis, Project: emlinux-ssd1935, Lines: 101
Note: The swp_entry function examples in this article were collected from source-code and documentation hosting platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective developers; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not repost without permission.