<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<script src="labels.js" defer></script>
<script src="picture.js" defer></script>
<script src="panels.js" defer></script>
<link rel="stylesheet" href="style.css" />
<title>Concrete Label</title>
</head>
<body>
<main id="container">
<figure class="background-container">
<input type="file" />
<img id="background-image" draggable="false" src="#" />
</figure>
<div id="editor"></div>
<div class="text-input">
<form class="modal">
<input id="input" placeholder="Describe this area" type="text" />
<button id="insert" type="submit">Insert</button>
<button id="cancel">x</button>
</form>
</div>
</main>
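<!-- Toggles for the info and transcription side panels below (presumably wired up in panels.js). -->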
<nav>
<button id="show-transcription">...</button>
<button id="show-info">?</button>
</nav>
<aside class="info" id="info-panel">
<button class="close">X</button>
<h1 class="title">Concrete 🎏 Label</h1>
<p>
How could a computer read concrete &amp; visual poetry? How does a computer navigate
through text objects in which layout and graphical elements play a fundamental role?
</p>
<p>
With this tool you can upload an image and then annotate it spatially. In doing so
you generate a transcription of the image that keeps track of the order of your
annotations (and so the visual path you take when reading the image), as well as
their position and size. (wip 👹)
</p>
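<!--
  A hypothetical sketch (not taken from labels.js) of what a single
  transcription entry could hold, given that reading order, position
  and size are tracked per annotation; all field names are assumptions:

  { "order": 1,
    "text": "a label typed into the modal above",
    "x": 120, "y": 80,
    "width": 200, "height": 60 }
-->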
<p>
Neither the image nor the labels nor the transcription will be uploaded online.
Everything happens in your browser.
</p>
</aside>
<aside class="transcription" id="transcription-panel">
<button class="close">X</button>
<h1 class="title">Label Transcription</h1>
</aside>
</body>
</html>