Images produced by Gemini nano banana keep getting sharper (currently up to 2.8K), and I now use it for almost every illustration in my articles and slides. The watermark stamped on each image, however, always feels like a small blemish.
So I started exploring the feasibility of writing my own web page to remove the watermark. Plenty of online services already do this for free, of course, but writing my own means I can freely extend it later to handle other kinds of watermarks.
Two versions are already online, and readers are welcome to try them:
Version 1: fast, removes the bottom-right watermark:
https://iaiguidance.com/remove/index.html

Version 2: slower, but can handle watermarks in different positions: https://iaiguidance.com/remove/index2.html

The code below is a web tool named Clear Nano Watermark Pro V3.2. Its core purpose is to automatically and precisely remove a watermark or UI overlay at a known position in an image (the Gemini nano watermark currently sits at a fixed position, though its size varies with the image resolution).
It combines template matching from computer vision with image-processing algorithms. A detailed breakdown of its purpose and principles follows:
1. Main Uses
- Automatic detection and removal: using preset masks (bg_48.png and bg_96.png in the code), the tool automatically searches the image corners for a matching position.
- Image restoration (watermark removal): a "reverse alpha blending" algorithm attempts to recover the underlying pixels covered by the semi-transparent white mask, achieving the "erasure" effect.
- High-resolution optimization: Y-axis click precision is specially corrected for large, tall images (e.g. 2816px), making the tool suitable for fine-grained visual assets.
<!DOCTYPE html>
<html lang="zh-TW">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ClearNano Pro V3.2 - 終極精準工作流版</title>
<link href="https://fonts.googleapis.com/css2?family=Noto+Sans+TC:wght@400;500;700&family=Inter:wght@400;600&display=swap" rel="stylesheet">
<style>
:root {
--color-bg: #0f0f13;
--color-bg-secondary: #1a1a23;
--color-primary: #8b5cf6;
--color-primary-hover: #7c3aed;
--color-success: #10b981;
--color-text: #f8fafc;
--color-border: #2d2d39;
}
body { font-family: 'Inter', 'Noto Sans TC', sans-serif; background-color: var(--color-bg); color: var(--color-text); margin: 0; padding: 20px; }
.container { max-width: 1200px; margin: 0 auto; }
.header { text-align: center; margin-bottom: 40px; }
/* Upload section */
#uploadSection { display: block; }
.upload-zone {
background: var(--color-bg-secondary); border: 2px dashed var(--color-border); border-radius: 24px;
padding: 80px 60px; text-align: center; cursor: pointer; transition: 0.3s;
}
.upload-zone.drag-over { border-color: var(--color-primary); background: rgba(139, 92, 246, 0.1); }
/* Results section */
#resultsSection { display: none; margin-top: 20px; }
.results-header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 30px; }
.results-grid { display: grid; grid-template-columns: repeat(auto-fill, minmax(300px, 1fr)); gap: 25px; }
.result-card { background: var(--color-bg-secondary); border-radius: 18px; overflow: hidden; border: 1px solid var(--color-border); cursor: pointer; transition: 0.2s; }
.result-card:hover { border-color: var(--color-primary); }
.result-thumb { width: 100%; aspect-ratio: 16/10; object-fit: cover; display: block; }
.result-info { padding: 15px; font-size: 13px; display: flex; justify-content: space-between; align-items: center; }
/* Modal editor */
.modal { position: fixed; inset: 0; z-index: 1000; display: none; align-items: center; justify-content: center; }
.modal.active { display: flex; }
.modal-overlay { position: absolute; inset: 0; background: rgba(0,0,0,0.92); backdrop-filter: blur(10px); }
.modal-content {
position: relative; width: 95%; max-width: 1400px; height: 90vh;
background: var(--color-bg-secondary); border-radius: 24px;
display: flex; flex-direction: column; overflow: hidden;
}
.modal-body { flex: 1; position: relative; overflow: auto; background: #000; display: flex; align-items: center; justify-content: center; }
/* Corrected preview container */
.preview-wrapper { position: relative; display: inline-flex; justify-content: center; align-items: center; }
#selectionBox {
position: absolute; border: 2px solid #ef4444; background: rgba(239, 68, 68, 0.2);
pointer-events: none; display: none; z-index: 5; box-sizing: border-box;
}
#previewImage { max-width: 100%; max-height: 72vh; display: block; cursor: crosshair; margin: 0 auto; }
.modal-footer { padding: 25px; background: #12121a; border-top: 1px solid var(--color-border); display: flex; flex-wrap: wrap; gap: 20px; align-items: center; }
.control-group { display: flex; align-items: center; gap: 12px; font-size: 14px; }
input[type="range"] { width: 220px; accent-color: var(--color-primary); }
.btn { padding: 10px 20px; border-radius: 10px; border: none; font-weight: 600; cursor: pointer; display: inline-flex; align-items: center; gap: 8px; }
.btn-primary { background: var(--color-primary); color: white; }
.btn-secondary { background: #334155; color: white; }
.btn-success { background: var(--color-success); color: white; }
.btn-outline { background: transparent; border: 1px solid var(--color-border); color: var(--color-text); }
</style>
</head>
<body>
<div class="container">
<header class="header">
<h1>ClearNano <span style="color:var(--color-primary)">V3.2 Pro</span></h1>
<p>修正 Y 軸點擊精度、下載功能與全圖自動搜尋</p>
</header>
<section id="uploadSection">
<div class="upload-zone" id="dropZone">
<svg width="48" height="48" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" style="margin-bottom: 15px; color: var(--color-primary);">
<path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4M17 8l-5-5-5 5M12 3v12"/>
</svg>
<h3>拖放圖片、點擊上傳或 Ctrl+V 貼上</h3>
<p>已針對 2816px 高解析度長圖優化 Y 軸點擊精度</p>
<input type="file" id="fileInput" accept="image/*" multiple hidden>
</div>
</section>
<section id="resultsSection">
<div class="results-header">
<h2>處理結果</h2>
<button class="btn btn-outline" id="backBtn">返回上傳畫面</button>
</div>
<div id="statusSection" style="text-align:center; padding: 20px; display:none; color:var(--color-primary);">AI 正在智慧搜尋並對位...</div>
<div class="results-grid" id="resultsGrid"></div>
</section>
</div>
<div class="modal" id="editorModal">
<div class="modal-overlay" id="closeModal"></div>
<div class="modal-content">
<div class="modal-body">
<div class="preview-wrapper">
<div id="selectionBox"></div>
<img id="previewImage" src="" alt="Preview">
</div>
</div>
<div class="modal-footer">
<div class="control-group">
<label>X 偏移:</label>
<input type="range" id="offsetX" min="-3000" max="3000" value="0">
<span id="valX">0</span>px
</div>
<div class="control-group">
<label>Y 偏移:</label>
<input type="range" id="offsetY" min="-3000" max="3000" value="0">
<span id="valY">0</span>px
</div>
<div style="margin-left: auto; display: flex; gap: 10px;">
<button class="btn btn-secondary" id="btnViewOriginal">對比原圖</button>
<button class="btn btn-primary" id="autoAlignBtn">精細吸附</button>
<button class="btn btn-success" id="reprocessDownloadBtn">套用並下載</button>
</div>
</div>
</div>
</div>
<script>
class ClearNanoV3_2 {
constructor() {
this.masks = { 48: "assets/bg_48.png", 96: "assets/bg_96.png" };
this.loadedMasks = {};
this.processedImages = [];
this.editingIndex = null;
this.init();
}
async init() {
await this.loadMasks();
this.bindEvents();
}
async loadMasks() {
for (const [size, path] of Object.entries(this.masks)) {
try {
const img = new Image(); img.src = path;
await new Promise(res => img.onload = res);
const canvas = document.createElement("canvas");
canvas.width = canvas.height = size;
const ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
const data = ctx.getImageData(0, 0, size, size);
const edges = this.applySobel(data);
let sum = 0, sumSq = 0;
for (let v of edges) { sum += v; sumSq += v * v; }
const mean = sum / edges.length;
this.loadedMasks[size] = {
data: data.data, size: parseInt(size), edges, mean,
std: Math.sqrt(sumSq / edges.length - mean * mean)
};
} catch (e) { console.error("Mask failed to load:", size); }
}
}
applySobel(imageData) {
const { width, height, data } = imageData;
const grayscale = new Float32Array(width * height);
const output = new Float32Array(width * height);
for (let i = 0; i < data.length; i += 4) grayscale[i/4] = (data[i]*0.299 + data[i+1]*0.587 + data[i+2]*0.114);
for (let y = 1; y < height - 1; y++) {
for (let x = 1; x < width - 1; x++) {
const idx = y * width + x;
const gx = -grayscale[idx-width-1] + grayscale[idx-width+1] - 2*grayscale[idx-1] + 2*grayscale[idx+1] - grayscale[idx+width-1] + grayscale[idx+width+1];
const gy = -grayscale[idx-width-1] - 2*grayscale[idx-width] - grayscale[idx-width+1] + grayscale[idx+width-1] + 2*grayscale[idx+width] + grayscale[idx+width+1];
output[idx] = Math.sqrt(gx * gx + gy * gy);
}
}
return output;
}
bindEvents() {
const dropZone = document.getElementById('dropZone');
const fileInput = document.getElementById('fileInput');
const backBtn = document.getElementById('backBtn');
backBtn.onclick = () => {
document.getElementById('resultsSection').style.display = 'none';
document.getElementById('uploadSection').style.display = 'block';
this.processedImages = [];
document.getElementById('resultsGrid').innerHTML = "";
};
dropZone.onclick = () => fileInput.click();
fileInput.onchange = (e) => this.processFiles(e.target.files);
dropZone.ondragover = (e) => { e.preventDefault(); dropZone.classList.add('drag-over'); };
dropZone.ondragleave = () => dropZone.classList.remove('drag-over');
dropZone.ondrop = (e) => {
e.preventDefault(); dropZone.classList.remove('drag-over');
const files = Array.from(e.dataTransfer.files).filter(f => f.type.startsWith("image/"));
if (files.length) this.processFiles(files);
};
document.onpaste = (e) => {
const items = (e.clipboardData && e.clipboardData.items) || [];
for (const item of items) { if (item.type.indexOf("image") !== -1) this.processFiles([item.getAsFile()]); }
};
document.getElementById('previewImage').onclick = (e) => this.handleImageClick(e);
document.getElementById('reprocessDownloadBtn').onclick = () => this.applyManualOffset(true);
document.getElementById('autoAlignBtn').onclick = () => this.autoAlign();
document.getElementById('closeModal').onclick = () => document.getElementById('editorModal').classList.remove('active');
const btnView = document.getElementById('btnViewOriginal');
btnView.onmousedown = () => { if(this.editingIndex!==null) document.getElementById('previewImage').src = this.processedImages[this.editingIndex].originalUrl; };
btnView.onmouseup = btnView.onmouseleave = () => { if(this.editingIndex!==null) document.getElementById('previewImage').src = this.processedImages[this.editingIndex].currentUrl; };
['X', 'Y'].forEach(axis => {
document.getElementById(`offset${axis}`).oninput = (e) => {
document.getElementById(`val${axis}`).textContent = e.target.value;
this.updateSelectionBox();
};
});
}
async processFiles(files) {
document.getElementById('uploadSection').style.display = 'none';
document.getElementById('resultsSection').style.display = 'block';
document.getElementById('statusSection').style.display = 'block';
for (const file of files) {
const result = await this.smartProcess(file);
this.processedImages.push(result);
this.renderCard(this.processedImages.length - 1);
}
document.getElementById('statusSection').style.display = 'none';
}
async smartProcess(input) {
return new Promise((resolve) => {
const isFile = input instanceof File;
const load = (src) => {
const img = new Image();
img.onload = async () => {
const canvas = document.createElement('canvas');
canvas.width = img.width; canvas.height = img.height;
const ctx = canvas.getContext('2d', { willReadFrequently: true });
ctx.drawImage(img, 0, 0);
const config = (img.width > 1024 && img.height > 1024) ? { size: 96, margin: 64 } : { size: 48, margin: 32 };
const mask = this.loadedMasks[config.size];
const scanSize = 480;
const regions = [
{ x1: img.width - scanSize, x2: img.width - mask.size, y1: img.height - scanSize, y2: img.height - mask.size },
{ x1: 0, x2: scanSize, y1: 0, y2: scanSize },
{ x1: 0, x2: scanSize, y1: img.height - scanSize, y2: img.height - mask.size },
{ x1: img.width - scanSize, x2: img.width - mask.size, y1: 0, y2: scanSize }
];
let bestX = img.width - config.margin - config.size, bestY = img.height - config.margin - config.size, maxCorr = -Infinity;
for (const reg of regions) {
for (let y = reg.y1; y <= reg.y2; y += 4) {
for (let x = reg.x1; x <= reg.x2; x += 4) {
if (x < 0 || y < 0 || x + mask.size > img.width || y + mask.size > img.height) continue;
const score = this.calculateNCC(this.applySobel(ctx.getImageData(x, y, mask.size, mask.size)), mask);
if (score > maxCorr) { maxCorr = score; bestX = x; bestY = y; }
}
}
}
const imageData = ctx.getImageData(bestX, bestY, config.size, config.size);
this.reverseAlphaBlend(imageData, mask.data);
ctx.putImageData(imageData, bestX, bestY);
resolve({
name: isFile ? input.name : "Pasted_Img", originalUrl: src, currentUrl: canvas.toDataURL(),
config, lastOffset: { x: bestX - (img.width - config.margin - config.size), y: bestY - (img.height - config.margin - config.size) }
});
};
img.src = src;
};
if (isFile) {
const reader = new FileReader();
reader.onload = (e) => load(e.target.result);
reader.readAsDataURL(input);
} else load(input.originalUrl || URL.createObjectURL(input));
});
}
calculateNCC(area, mask) {
let aSum = 0, aSumSq = 0;
for (let v of area) { aSum += v; aSumSq += v * v; }
const aMean = aSum / area.length, aStd = Math.sqrt(aSumSq / area.length - aMean * aMean);
if (aStd === 0 || mask.std === 0) return -1;
let num = 0;
for (let i = 0; i < area.length; i++) num += (area[i] - aMean) * (mask.edges[i] - mask.mean);
return num / (area.length * aStd * mask.std);
}
reverseAlphaBlend(imageData, maskData) {
const data = imageData.data;
for (let i = 0; i < data.length; i += 4) {
let a = Math.max(maskData[i], maskData[i+1], maskData[i+2]) / 255;
if (a > 0) {
a = Math.min(a, 0.995);
const inv = 1 - a;
data[i] = Math.max(0, Math.min(255, (data[i] - a * 255) / inv));
data[i+1] = Math.max(0, Math.min(255, (data[i+1] - a * 255) / inv));
data[i+2] = Math.max(0, Math.min(255, (data[i+2] - a * 255) / inv));
}
}
}
renderCard(index) {
const item = this.processedImages[index];
const card = document.createElement('div');
card.className = 'result-card';
card.innerHTML = `<img src="${item.currentUrl}" class="result-thumb"><div class="result-info"><div>${item.name}</div><span style="color:var(--color-primary)">AI 已校準</span></div>`;
card.onclick = () => this.openEditor(index);
document.getElementById('resultsGrid').appendChild(card);
}
openEditor(index) {
this.editingIndex = index;
const item = this.processedImages[index];
document.getElementById('previewImage').src = item.currentUrl;
document.getElementById('offsetX').value = item.lastOffset.x;
document.getElementById('offsetY').value = item.lastOffset.y;
document.getElementById('valX').textContent = item.lastOffset.x;
document.getElementById('valY').textContent = item.lastOffset.y;
document.getElementById('editorModal').classList.add('active');
setTimeout(() => this.updateSelectionBox(), 100);
}
updateSelectionBox() {
const preview = document.getElementById('previewImage');
const box = document.getElementById('selectionBox');
const item = this.processedImages[this.editingIndex];
// Key fix: compute the X and Y scale factors independently
const scaleX = preview.clientWidth / preview.naturalWidth;
const scaleY = preview.clientHeight / preview.naturalHeight;
const ox = parseInt(document.getElementById('offsetX').value);
const oy = parseInt(document.getElementById('offsetY').value);
const x = (preview.naturalWidth - item.config.margin - item.config.size + ox) * scaleX;
const y = (preview.naturalHeight - item.config.margin - item.config.size + oy) * scaleY;
const sw = item.config.size * scaleX;
const sh = item.config.size * scaleY;
box.style.display = 'block';
box.style.left = `${x}px`;
box.style.top = `${y}px`;
box.style.width = `${sw}px`;
box.style.height = `${sh}px`;
}
/**
* Core Y-axis click precision logic
*/
handleImageClick(e) {
const preview = e.target;
const item = this.processedImages[this.editingIndex];
const rect = preview.getBoundingClientRect();
// 1. Map from absolute viewport coordinates, fixing the Y offset caused by vertical centering
const clickX = e.clientX - rect.left;
const clickY = e.clientY - rect.top;
// 2. Get the exact scale factors
const scaleX = preview.naturalWidth / preview.clientWidth;
const scaleY = preview.naturalHeight / preview.clientHeight;
// 3. Map to original pixel coordinates
const naturalX = clickX * scaleX;
const naturalY = clickY * scaleY;
// 4. Compute the relative offset
const baseStartX = preview.naturalWidth - item.config.margin - item.config.size;
const baseStartY = preview.naturalHeight - item.config.margin - item.config.size;
const offX = Math.round(naturalX - (item.config.size / 2) - baseStartX);
const offY = Math.round(naturalY - (item.config.size / 2) - baseStartY);
const limit = 3000;
const finalX = Math.max(-limit, Math.min(limit, offX));
const finalY = Math.max(-limit, Math.min(limit, offY));
document.getElementById('offsetX').value = finalX;
document.getElementById('offsetY').value = finalY;
document.getElementById('valX').textContent = finalX;
document.getElementById('valY').textContent = finalY;
this.updateSelectionBox();
this.autoAlign();
}
async autoAlign() {
const btn = document.getElementById('autoAlignBtn');
btn.disabled = true; btn.textContent = "吸附中...";
const item = this.processedImages[this.editingIndex], mask = this.loadedMasks[item.config.size];
const img = new Image(); img.src = item.originalUrl;
await new Promise(res => img.onload = res);
const canvas = document.createElement('canvas');
canvas.width = img.width; canvas.height = img.height;
const ctx = canvas.getContext('2d', { willReadFrequently: true });
ctx.drawImage(img, 0, 0);
const curX = img.width - item.config.margin - item.config.size + parseInt(document.getElementById('offsetX').value);
const curY = img.height - item.config.margin - item.config.size + parseInt(document.getElementById('offsetY').value);
let bestX = curX, bestY = curY, maxCorr = -Infinity;
const R = 100;
for (let dy = -R; dy <= R; dy++) {
for (let dx = -R; dx <= R; dx++) {
const tx = curX + dx, ty = curY + dy;
if (tx < 0 || ty < 0 || tx + mask.size > img.width || ty + mask.size > img.height) continue;
const score = this.calculateNCC(this.applySobel(ctx.getImageData(tx, ty, mask.size, mask.size)), mask);
if (score > maxCorr) { maxCorr = score; bestX = tx; bestY = ty; }
}
}
document.getElementById('offsetX').value = bestX - (img.width - item.config.margin - item.config.size);
document.getElementById('offsetY').value = bestY - (img.height - item.config.margin - item.config.size);
['X', 'Y'].forEach(a => document.getElementById(`val${a}`).textContent = document.getElementById(`offset${a}`).value);
this.updateSelectionBox();
this.applyManualOffset(false);
btn.disabled = false; btn.textContent = "精細吸附";
}
async applyManualOffset(download = false) {
const item = this.processedImages[this.editingIndex];
const ox = parseInt(document.getElementById('offsetX').value), oy = parseInt(document.getElementById('offsetY').value);
const img = new Image(); img.src = item.originalUrl;
await new Promise(res => img.onload = res);
const canvas = document.createElement('canvas');
canvas.width = img.width; canvas.height = img.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
const sx = img.width - item.config.margin - item.config.size + ox, sy = img.height - item.config.margin - item.config.size + oy;
const mask = this.loadedMasks[item.config.size];
// Key fix: write the processed pixels back onto the canvas
const imageData = ctx.getImageData(sx, sy, mask.size, mask.size);
this.reverseAlphaBlend(imageData, mask.data);
ctx.putImageData(imageData, sx, sy);
const finalUrl = canvas.toDataURL("image/png");
item.currentUrl = finalUrl;
item.lastOffset = { x: ox, y: oy };
document.getElementById('previewImage').src = finalUrl;
document.querySelectorAll('.result-card')[this.editingIndex].querySelector('img').src = finalUrl;
if (download) {
const a = document.createElement('a');
a.href = finalUrl; a.download = `clear_${item.name.replace(/\.[^/.]+$/, "")}.png`; a.click();
}
}
}
window.onload = () => window.app = new ClearNanoV3_2();
</script>
</body>
</html>
2. Core Technical Principles
The program's workflow can be broken into four stages:
A. Edge Feature Extraction (Sobel Operator)
The program does not compare colors directly (the background varies); it compares shape.
- Principle: the applySobel function converts the mask and the target image region to grayscale and computes the gradient between neighboring pixels.
- Purpose: to extract the watermark's outline. Whether the background is red or blue, the watermark's edge shape stays the same, which greatly improves detection accuracy.
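The edge-extraction step can be sketched on its own, outside the page. The following is a minimal standalone version of applySobel that operates on a flat grayscale array, using the same Gx/Gy kernels as the source:

```javascript
// Minimal Sobel gradient-magnitude sketch, mirroring applySobel():
// `gray` is a grayscale image as a flat Float32Array in row-major order.
function sobelMagnitude(gray, width, height) {
  const out = new Float32Array(width * height);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      // Horizontal (Gx) and vertical (Gy) Sobel kernels
      const gx = -gray[i - width - 1] + gray[i - width + 1]
               - 2 * gray[i - 1] + 2 * gray[i + 1]
               - gray[i + width - 1] + gray[i + width + 1];
      const gy = -gray[i - width - 1] - 2 * gray[i - width] - gray[i - width + 1]
               + gray[i + width - 1] + 2 * gray[i + width] + gray[i + width + 1];
      out[i] = Math.sqrt(gx * gx + gy * gy);
    }
  }
  return out;
}

// A vertical step edge: left half 0, right half 255.
const w = 6, h = 5;
const gray = new Float32Array(w * h).map((_, i) => (i % w) < 3 ? 0 : 255);
const edges = sobelMagnitude(gray, w, h);
// Strong response on the boundary column, zero response on flat areas:
console.log(edges[2 * w + 2] > 0, edges[2 * w + 1] === 0); // true true
```

This is why the match is robust to background color: a flat region scores zero everywhere, while the watermark's outline produces the same edge pattern regardless of what sits behind it.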
B. Template Matching and Alignment (NCC)
To locate the watermark, the program uses Normalized Cross-Correlation (NCC).
- Operation: the program scans the four corners of the image (the areas defined in the regions array).
- Mathematical comparison: the calculateNCC function measures the correlation between two image patches; the closer the value is to 1, the better that position matches the preset mask.
- Auto-snap: the autoAlign function performs a fine search within ±100 pixels of the user's click, finds the position with the strongest match, and "snaps" onto it.
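The score itself is compact. Below is a standalone version of calculateNCC over two equal-length feature arrays; unlike the tool, which caches the mask statistics in loadMasks, this sketch recomputes both sides each call:

```javascript
// Normalized cross-correlation between two equal-length feature arrays.
// Returns a value in [-1, 1]; 1 means a perfect (linear) match.
function ncc(a, b) {
  const n = a.length;
  let aSum = 0, bSum = 0, aSq = 0, bSq = 0;
  for (let i = 0; i < n; i++) {
    aSum += a[i]; bSum += b[i];
    aSq += a[i] * a[i]; bSq += b[i] * b[i];
  }
  const aMean = aSum / n, bMean = bSum / n;
  const aStd = Math.sqrt(aSq / n - aMean * aMean);
  const bStd = Math.sqrt(bSq / n - bMean * bMean);
  if (aStd === 0 || bStd === 0) return -1; // flat patch: no usable signal
  let num = 0;
  for (let i = 0; i < n; i++) num += (a[i] - aMean) * (b[i] - bMean);
  return num / (n * aStd * bStd);
}

const m = Float32Array.from([0, 1, 2, 3, 4, 5]);
console.log(ncc(m, m).toFixed(3));                      // identical → 1.000
console.log(ncc(m, m.map(v => 10 + 2 * v)).toFixed(3)); // linear rescale → 1.000
```

The normalization by mean and standard deviation is what makes the score invariant to overall brightness and contrast, so the same mask matches the watermark over light or dark backgrounds.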
C. Reverse Alpha Blending
This is the mathematical core of the removal. Assume the watermark is a semi-transparent white overlay (RGBA). The forward compositing model is

observed = α × 255 + (1 − α) × original

so solving for the underlying pixel gives

original = (observed − α × 255) / (1 − α)

- This recomputes the original color of each pixel covered by the white mask, based on the mask's transparency (its alpha value).
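A self-contained sketch of that inversion follows, using the same per-channel math as reverseAlphaBlend, including its clamp of α away from 1 to avoid a division blow-up:

```javascript
// Forward model: a semi-transparent white overlay produces
//   observed = alpha * 255 + (1 - alpha) * original
// so the underlying channel value is recovered as
//   original = (observed - alpha * 255) / (1 - alpha)
function unblendChannel(observed, alpha) {
  if (alpha <= 0) return observed;      // uncovered pixel: nothing to undo
  const a = Math.min(alpha, 0.995);     // clamp, as the tool does, to avoid /0
  const v = (observed - a * 255) / (1 - a);
  return Math.max(0, Math.min(255, v)); // clip back into the valid channel range
}

// Round-trip check: blend a pixel with a 40% white overlay, then invert.
const original = 80, alpha = 0.4;
const observed = alpha * 255 + (1 - alpha) * original;
console.log(unblendChannel(observed, alpha)); // recovers ~80
```

One caveat visible in the math: as α approaches 1 the denominator approaches 0, so almost-opaque watermark pixels amplify any quantization noise. That is why near-opaque regions can leave faint artifacts even after a perfect alignment.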
D. Coordinate Mapping
For high-resolution images, the image displayed on the page (clientWidth) differs from its actual pixels (naturalWidth).
- The program computes separate scaleX and scaleY ratios so that a click on screen maps precisely to the correct pixel coordinate in the original 2000-3000px image, fixing the click drift on tall images.
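The mapping reduces to two independent scale factors. A sketch of handleImageClick's arithmetic with plain numbers (the sizes below are stand-ins for clientWidth/clientHeight and naturalWidth/naturalHeight):

```javascript
// Map a click inside a scaled-down <img> back to original pixel coordinates.
// clientW/H: displayed size; naturalW/H: true pixel size of the image.
function clickToNatural(clickX, clickY, clientW, clientH, naturalW, naturalH) {
  const scaleX = naturalW / clientW; // X and Y are scaled independently --
  const scaleY = naturalH / clientH; // the fix that matters for tall 2816px images
  return { x: clickX * scaleX, y: clickY * scaleY };
}

// A 1408x2816 image displayed at 352x704 (4x reduction on both axes):
const p = clickToNatural(100, 650, 352, 704, 1408, 2816);
console.log(p.x, p.y); // 400 2600
```

Using a single shared ratio would be correct only when the image's aspect ratio is preserved exactly by the layout; computing the two factors separately stays correct even when CSS constraints scale the axes differently.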
3. Feature Modules
- loadMasks / applySobel: load the preset masks and precompute their edge features.
- smartProcess / calculateNCC: coarse corner scan (4px stride) to auto-locate the watermark.
- reverseAlphaBlend: undo the semi-transparent overlay on the matched patch.
- Editor modal (openEditor / handleImageClick / autoAlign / applyManualOffset): manual repositioning, fine snapping, before/after comparison, and download.
4. Summary
This is a client-side (browser-based) image-processing tool. Deploy it on an HTTP server (e.g. Apache or IIS).
What you can do:
- Place the watermark masks for the supported sizes in the assets/ directory.
- Upload an image carrying that watermark; the program automatically locates and erases it.
- The Gemini nano masks can be downloaded directly:
https://iaiguidance.com/remove/assets/bg_48.png
https://iaiguidance.com/remove/assets/bg_96.png