Conversation
Discovering_the_Secret_Language_of_Dalle.pdf
-
@rru142 @p @helene > he thinks we're continuous
please anthropomorphise at your own discretion
-
@helene @p > "it might try real hard to figure out a meaning"E. W. Dijsktra is looking at you.(Imagine a bearded old cheese citing EWD1036-20 and EWD1036-21...)"[...] My next linguistical suggestion is more rigorous. It is to fight the "if-this-guy-wants-to-talk-to-that-guy" syndrome: never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so. This linguistical improvement is much harder to implement than you might think, and your department might consider the introduction of fines for violations, say a quarter for undergraduates, two quarters for graduate students, and five dollars for faculty members: by the end of the first semester of the new regime, you will have collected enough money for two scholarships.The reason for this last suggestion is that the anthropomorphic metaphor —for whose introduction we can blame John von Neumann— is an enormous handicap for every computing community that has adopted it. I have now encountered programs wanting things, knowing things, expecting things, believing things, etc., and each time that gave rise to avoidable confusions. The analogy that underlies this personification is so shallow that it is not only misleading but also paralyzing.It is misleading in the sense that it suggests that we can adequately cope with the unfamiliar discrete in terms of the familiar continuous, i.e. ourselves, quod non. It is paralyzing in the sense that, because persons exist and act in time, its adoption effectively prevents a departure from operational semantics and thus forces people to think about programs in terms of computational behaviours, based on an underlying computational model. This is bad, because operational reasoning is a tremendous waste of mental effort.[...]"(I shall, however, note that this Dijkstra entity has zapped me so many times that it decided not to care for my existence any longer, thus saving energy... And of course "it tries to put a packet on the wire", damn...)
-
@p probably in the same way that we consider "bouba" soft and "kiki" spiky: the "language processing" layer's output might be tingling the "image creation" layer in the right ways

it probably has to do with how bird and insect pictures are almost always labelled with their Latin names, hence those weird Latin-ish strings, if such labels were massively present in the training datasets

it might try real hard to figure out a meaning in those (there often is one, but it tends to be obtuse) and hence interprets them that way
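(A minimal sketch of that tokenization idea, under loud assumptions: DALL-E's actual text tokenizer isn't public here, so the public openai/clip-vit-base-patch32 CLIP BPE tokenizer stands in for it, and "Apoploe vesrreaitais" is used as an example gibberish "birds" prompt of the kind the attached paper discusses. The point is only to show how a nonsense Latin-ish string and a real Latin binomial both break into sub-word pieces that the training data could have tied to bird imagery.)

from transformers import CLIPTokenizer

# Public CLIP BPE tokenizer, used as a stand-in for whatever text
# tokenizer the image model actually uses (assumption, see above).
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Compare how a gibberish Latin-ish prompt and a real Latin binomial
# split into sub-word pieces; the speculation is that they share
# pieces the model associated with birds/insects during training.
for text in ["Apoploe vesrreaitais", "Passer domesticus"]:
    print(text, "->", tok.tokenize(text))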
-
@helene Why these sequences of letters?
-
@p isn't that because it's pretty much its only means of input?
-
@helene It's interesting that it's into sequences of letters.
-
@p so they've figured out that an ML model gets funky when it sees letters it likes a lot due to how frequently they appeared in the training set? :akko_confused:
-
@rru142 @p @helene (have read through all of Dijkstra's stuff before, yeh; just poking fun because he's a silly curmudgeon sometimes)
-
@shmibs @p @helene Oh, he elaborated on that "familiar continuous" before the paragraphs that I quoted. The whole essay is here: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1036.html
-
@rru142 @helene @p (and that, if you stop trying to separate the two, you instead get the understanding that everything underneath is discrete, and analogue is just what happens when the numbers get big; computers can do floats and run wiggly sims as well)
-
@shmibs @rru142 @helene I have had a vision of Dijkstra berating a mechanic for using the phrase "choke the engine".