Thanks for taking an interest in my article, and for taking the time to comment! I think you are probably right that we can think of what neural networks can currently do as a kind of weak creativity. Of course, there is a kind of curation on the part of the practitioner working with the neural network, which is in some sense the final creative act: choosing what is or is not of potential cultural value.
Falling outside of what it has learnt is indeed the kind of thing I have in mind. In philosophy, however, alterity has a stronger sense, in which it could perhaps be thought of as an orthogonal difference from what currently exists — one that cannot be bridged by any sort of analogous or combinatorial leap.
I wrote this article last spring, and since then I have done much more work with generative network models (such as GANs, VAEs, etc.). That work has honestly only made it clearer that, in their current formulations, generative models essentially just learn to reproduce existing statistical patterns in the world, rather than produce any meaningfully new ones. Any apparent novelty comes from the intervention of the human in the process. Of course this could change, especially as the challenge of reproducing existing patterns is more fully solved, and the "creative" limitations of that approach are made more apparent.