Word–color associations have long been investigated in cognitive science and psychology.

We built a computational model, trained on a large-scale corpus, that maps words to a 3D color space.

**Type your word in the text box above.** We will colorize your word character by character and change the background color.

Try:

- **Color Terms**: Red, Green, Blue, Pink, Yellow, Dark Blue
- **Emotional Words**: Love, Anger, Happiness, Sadness, Relax
- **Food Names**: Tomato, Eggplant, Raspberry, Blackberry, Blueberry
- **Bigrams**: Mint Cream, Aqua Blue
- **Noisy Words**: Deeeeeeep Blueeeee
- **Abstract Words**: Emo Princess, etc.

Our word-to-color model predicts a color in Lab space given the sequence of characters in a color's name, $\boldsymbol{c} = \langle c_1, c_2, \ldots, c_{|\boldsymbol{c}|}\rangle$, where each $c_{i}$ is a character in a finite alphabet. Each character $c_i$ is represented by a learned vector embedding in $\mathbb{R}^{300}$. To compose these together, we use a bidirectional LSTM with 300 hidden units, and the resulting two representations (state and memory) are concatenated to yield a fixed-dimensional vector representation $\mathbf{h} \in \mathbb{R}^{600}$ of the sequence. The associated color value in Lab space is then defined to be $\hat{\mathbf{y}} = \sigma(\mathbf{W}\mathbf{h} + \mathbf{b})$, where $\mathbf{W} \in \mathbb{R}^{3 \times 600}$ and $\mathbf{b} \in \mathbb{R}^3$ transform $\mathbf{h}$.

To learn the parameters of the model (i.e., the parameters of the LSTMs, the character embeddings, and $\mathbf{W}$ and $\mathbf{b}$), we use reference color labels $\mathbf{y}$ from our training set and minimize the squared error, $\|\mathbf{y} - \hat{\mathbf{y}}\|^2$, averaged across the training set. Learning is accomplished using backpropagation and the Adam update rule.
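The forward pass and loss described above can be sketched in plain NumPy. Everything here is illustrative, not our actual implementation: the dimensions are tiny rather than 300/600, the weights are random and untrained, the reference label is made up, and a single forward LSTM stands in for the full bidirectional reader.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 8, 16                     # toy sizes; the model uses 300-dim embeddings/units
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
emb = rng.normal(scale=0.1, size=(len(ALPHABET), EMB))   # learned character embeddings

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One vanilla LSTM step; W packs input and recurrent weights for all four gates."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    return sigmoid(o) * np.tanh(c), c

W_lstm = rng.normal(scale=0.1, size=(4 * HID, EMB + HID))
W_out = rng.normal(scale=0.1, size=(3, 2 * HID))   # plays the role of W in the text
b_out = np.zeros(3)

def predict(name):
    """Read a color name character by character and emit a point in (0, 1)^3."""
    h = c = np.zeros(HID)
    for ch in name.lower():
        if ch in ALPHABET:
            h, c = lstm_step(emb[ALPHABET.index(ch)], h, c, W_lstm)
    feat = np.concatenate([h, c])        # state and memory concatenated
    return sigmoid(W_out @ feat + b_out)

# Squared-error loss against a hypothetical reference label rescaled to [0, 1].
y_hat = predict("deep blue")
loss = float(np.sum((np.array([0.3, 0.4, 0.6]) - y_hat) ** 2))
```

In training, the gradient of this loss with respect to all parameters (embeddings, LSTM weights, output layer) is computed by backpropagation and fed to Adam.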

All of our models use Long Short-Term Memory (LSTM) networks to read or generate sequences of characters. Let $\boldsymbol{c}=(c_{1}, c_2, \ldots, c_{n})$ be the characters in a word $w$ of length $n$. Using a lookup table, we project each character into a fixed $d$-dimensional vector, giving $\boldsymbol{x} = (\mathbf{x}_{1}, \mathbf{x}_2, \ldots, \mathbf{x}_{n})$. The model encodes the sequence from left to right according to the standard LSTM recurrences:

\begin{align*}
\mathbf{i}_{t} &= \sigma(\mathbf{W}_{xi}\mathbf{x}_{t} + \mathbf{W}_{hi}\mathbf{h}_{t-1} + \mathbf{W}_{ci}\mathbf{c}_{t-1} + \mathbf{b}_{i})\\
\mathbf{f}_{t} &= \sigma(\mathbf{W}_{xf}\mathbf{x}_{t} + \mathbf{W}_{hf}\mathbf{h}_{t-1}+\mathbf{W}_{cf}\mathbf{c}_{t-1} + \mathbf{b}_{f})\\
\mathbf{g}_{t} &= \tanh(\mathbf{W}_{xc}\mathbf{x}_{t} + \mathbf{W}_{hc}\mathbf{h}_{t-1} + \mathbf{b}_{c})\\
\mathbf{c}_{t} &= \mathbf{f}_{t}\odot\mathbf{c}_{t-1} + \mathbf{i}_{t}\odot\mathbf{g}_{t}\\
\mathbf{o}_{t} &= \sigma(\mathbf{W}_{xo}\mathbf{x}_{t} + \mathbf{W}_{ho}\mathbf{h}_{t-1} + \mathbf{W}_{co}\mathbf{c}_{t} + \mathbf{b}_{o})\\
\mathbf{h}_{t} &= \mathbf{o}_{t}\odot\tanh(\mathbf{c}_{t})
\end{align*}

This yields representations $\overrightarrow{\mathbf{h}_{t}}$ and $\overrightarrow{\mathbf{c}_{t}}$ for each position $t$ in the sequence, which can be interpreted as representations of the character at $t$ together with its left context $c_{1}, c_2, \ldots, c_t$.
The concatenation of these two vectors is our word representation:
\begin{align*}
\mathbf{h}_{t} = [\overrightarrow{\mathbf{h}_{t}} ; \overrightarrow{\mathbf{c}_{t}}]
\end{align*}
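The recurrences above, including the peephole terms $\mathbf{W}_{ci}$, $\mathbf{W}_{cf}$, and $\mathbf{W}_{co}$, can be transcribed almost term for term. The dimensions and random weights below are placeholders for illustration only.

```python
import numpy as np

H, D = 4, 3   # hypothetical small hidden/input sizes for illustration
rng = np.random.default_rng(1)
# One weight matrix per term in the equations: Wx* act on x_t, Wh* on h_{t-1},
# Wc* are the peephole connections acting on the memory cell.
p = {k: rng.normal(scale=0.1, size=(H, D if k.startswith("Wx") else H))
     for k in ["Wxi", "Whi", "Wci", "Wxf", "Whf", "Wcf",
               "Wxc", "Whc", "Wxo", "Who", "Wco"]}
b = {k: np.zeros(H) for k in "ifco"}
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def lstm_peephole_step(x, h_prev, c_prev):
    """One step of the recurrences, mirroring the equations term by term."""
    i = sigmoid(p["Wxi"] @ x + p["Whi"] @ h_prev + p["Wci"] @ c_prev + b["i"])
    f = sigmoid(p["Wxf"] @ x + p["Whf"] @ h_prev + p["Wcf"] @ c_prev + b["f"])
    g = np.tanh(p["Wxc"] @ x + p["Whc"] @ h_prev + b["c"])
    c = f * c_prev + i * g                     # new memory cell
    o = sigmoid(p["Wxo"] @ x + p["Who"] @ h_prev + p["Wco"] @ c + b["o"])
    h = o * np.tanh(c)                         # new hidden state
    return h, c
```

Iterating this step over the embedded characters and concatenating the final $\mathbf{h}$ and $\mathbf{c}$ yields the fixed-size word representation.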
Lab was originally designed so that Euclidean distances correlate with human-perceived color differences, and it is continuous. These properties suit our purpose:

- operations over predicted colors correspond to human perception (e.g., purple - red = blue);
- gradient-based optimization of an L2 loss is straightforward.

We consider the task of predicting a color (in Lab space) given its name. Our dataset is a collection of user-named colors downloaded from COLORlovers, a creative community where people from around the world create and share colors, palettes, and patterns. Our dataset contains 776,364 pairs with 581,483 unique names.

We considered two held-out datasets from other sources; these do not overlap with the training data:

- ggplot2: the 141 officially named colors used in ggplot2, a common plotting package for the R programming language.
- Paints: the 7,750 named colors of the paint manufacturer Sherwin-Williams.

We thank Lucas Beyer for very helpful comments and discussions.

Your feedback is highly appreciated and will help us make this project more fun! Please send your feedback to Color Lab.