Bryan McCann

[Portrait of Bryan McCann]
  • Schedule a Chat – Meeting new people and hearing their narratives are some of my favorite things, so don't hesitate to reach out

  • Twitter – follow for updates about AI research, startup life, and musings

  • LinkedIn – connect to discuss working together

  • Google Scholar – publications and patents

  • Semantic Scholar – publications and patents

  • GitHub – most of my open source contributions are linked below through Salesforce repositories

Some things about me

I am Co-founder and CTO at You.com.

Previously, I was a Lead Research Scientist at Salesforce Research in Palo Alto, where I worked on Deep Learning and its applications to Natural Language Processing. My job was mostly to advance the field of NLP and make difficult new directions seem possible and worthwhile to others. I started as an IC, but collaborated more and more until eventually I led my own NLP team.

My deep learning research started with an exploration of transfer learning, which led to contextualized word vectors (CoVe). Descendants along this line include ELMo and BERT.

Then I turned towards unified text-to-text approaches to multitask learning, culminating in a call to action with the Natural Language Decathlon (decaNLP), which has notable descendants in T5 and GPT-2/3.

Afterwards, I branched out in many different directions: explainable NLP, controllable text generation, protein generation with language modeling techniques, and a good deal of summarization as well.

My philosophical research started with questions about meaning, existence, and aesthetics that I studied while pursuing a philosophy degree at Stanford University. My best work in this area at that time was a defense of conceptual analysis that repositioned it as a useful tool for learning rather than as a means of discovering truth.

My foray into deep learning for NLP changed my views on meaning many times over the years, and my philosophical side ventured into the philosophy of science – a good deal of Popper, Kuhn, and all the rest.

I'm also an avid reader of literature and essays. Italo Calvino is my favorite writer at this moment and has captured what it feels like to think like me more than any other. Proust is second, and a handful of other authors have at various times better represented past versions of myself.

I dabble in poetry and regularly write essays on philosophy, literature, and how they relate to what I learn in research, but so far I only share these pieces by reading aloud.

I read books on Set Theory, Topological Analysis, and other such topics for fun, cover to cover, doing all the exercises. I feel a great appreciation for the beauty of mathematics and have more than once been brought to tears of joy in understanding fundamental concepts in various areas – most recently while contemplating just outside St. Peter's Basilica.

My other primary hobby is Latin. I read it and never write down translations, but Latin phrases often inspire entire essays and evenings.

Some of my work in AI

Some things I said, some with recordings

Professional Service

Some thoughts in progress

“the interplay of mistakes and creativity” -- source unnamed until I know they are comfortable with being on this site

What a beautiful interplay it is (when embraced as such). Having spent a lot of my time trying to teach machines how to learn from mistakes, I’ve found the role that mistakes play in learning to be a key theme in how I try to understand language, learning, and conceptual shifts (both in individuals and in groups of individuals). I personally think that people tend to be much more creative when mistakes are considered in the light of play and learning.

I often see this in people as they grow in expertise: as they gain experience, they are able to treat problems with greater risks (for them, the company, and others they care about) as a form of play. It takes time and supportive relationships with others to build comfort with errors and mistakes in a way that allows for playful creativity in increasingly high-pressure circumstances. A lot of people don’t get those kinds of relationships and end up carrying around concepts of mistakes laden with a gravity that hampers creativity. Perhaps some should train themselves more like AI, for which mistakes are always opportunities for learning. Unfortunately, I haven’t yet seen a good way to get AI to be more human in this respect: AI has no sense of which decisions carry greater risks or deserve the additional gravity -- the humans always decide for them one way or another. Humans experience discomfort when they aren’t prepared to carry the weight of potential mistaken decisions; AI doesn’t, and it can be risky when humans offload that weight to AI too much.

“jokes and unexpressed truths” -- source unnamed until I know they are comfortable with being on this site

With so much of machine learning focusing on probabilistic expectations, it is fascinating to see how humans react to things that are unexpected and yet carry some truth. Even the humble pun plays with our expectations and turns what might be considered a mistake by our internal language model into an act of creativity. There’s something wonderful about how the expectation that humor will break some other expectation seems to open up the possibility for truths to sneak out in ways that we’re otherwise conditioned not to accept.

Here is perhaps a relevant story from my time training large language models:

I could get them to answer some basic questions. I could get them to translate between languages. I could get them to spin fantastical tales of unicorns. I could even get them to write poems that actually moved my collaborators to start reading poetry for the first time in their lives. But I could not get them to tell jokes in any way that remotely felt human. The jokes generated were so bad, so egregiously awful, that we all sat around the computer late at night cracking up at just how un-funny the jokes were. Because we were expecting a certain kind of joke, the language model’s “anti-jokes” actually succeeded in making us laugh. I always felt there was an unexpressed truth in that night as we played with a language model as a way of playing with language, but I’ve not yet found the right joke to express it.

What makes a good joke?

“Is metaphor something only humans can do well because when machines do it that’s an error but when people do it they are learning??” -- source unnamed until I know they are comfortable with being on this site

I don’t know whether machines do metaphor surprisingly well, terribly, or not at all. And I don’t know how much I believe that the potential deficiencies are artifacts of how we’re training them, or whether they are consequences inherent to my intuitions about metaphor.

On the one hand:

A great deal of the fascinating things that we do as humans is grounded in how we notice similarities between things (and, on the flip side, dissimilarities) and then attend to those similarities. Typically, if I feel some kind of similarity, there is some truth to it, and metaphor allows me to communicate that truth without previously agreed-upon language for it. But it is on me and those in my linguistic community to examine that metaphor and its “truth” more closely. The initial role of metaphor is to make me notice a similarity. The follow-up role is to make me analyze the dissimilarity (how wrong the metaphor is and in what ways). This completes the learning process that we undergo when we humans do metaphor well.

In every AI I’ve made, I’ve imbued it with some notion of similarity (in a metaphorical sense) in the form of operations like the vector dot product. Whenever these models are wrong, they too adjust their internal representations of things so that their internal similarity/dissimilarity operations are more likely to predict the right outcomes. I know it is itself quite metaphorical, but this form of learning from errors (based on predictions that are in turn based on similarity operations) feels similar to the way we learn. This suggests that at least some of the basic functional pieces are there for future AI to do metaphor well.
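To make that loop concrete, here is a toy sketch in Python. It is entirely illustrative: the names, the numbers, and the two-item setup are my own invention, not any production system. It shows the shape of the thing I mean: dot-product similarity produces a prediction, and the error in that prediction reshapes the representations.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "concept" embeddings; everything here is invented for illustration.
    item_a = rng.normal(size=4)   # say, a representation of "sun"
    item_b = rng.normal(size=4)   # say, a representation of "lamp"
    context = rng.normal(size=4)  # a representation of the current context

    def predict(context, item_a, item_b):
        """Softmax over dot-product similarities: the 'noticing' operation."""
        logits = np.array([context @ item_a, context @ item_b])
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()  # probabilities for (item_a, item_b)

    # Suppose the right answer for this context is item_a.
    target = np.array([1.0, 0.0])
    lr = 0.5

    for _ in range(20):
        probs = predict(context, item_a, item_b)
        error = probs - target  # the prediction error drives learning
        # The cross-entropy gradient w.r.t. each item's embedding is
        # error_i * context, so every wrong prediction nudges the
        # representations until the similarity operation predicts better.
        item_a -= lr * error[0] * context
        item_b -= lr * error[1] * context

    print(predict(context, item_a, item_b))  # item_a now dominates

The point of the sketch is only the loop itself: similarity yields a prediction, the error reshapes the representations, and the next similarity judgment is a little less wrong.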

On the other hand:

With these machines, if we ask them to write metaphors, the results can often seem hollow to us -- this is what I think of when you suggest machines don’t do metaphor well. I think this is because we expect metaphors to show us something surprising, or at the very least something we don’t already know, making connections between things that previously seemed far apart in our conceptual network. Machines are often trained on what humans have already written down to predict the words (and metaphors) that are most likely. In some ways, the most likely corresponds to the least surprising, and hence the least useful for changing our conceptual framework, the least likely to trigger some learning in humans. Because the concepts of error and surprise are so intertwined with the way most of these machines are currently trained, it does seem like we’ve written a bad recipe for machine-generated metaphors.
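One way to see the “most likely is least surprising” point is that likelihood and surprise are two faces of the same number: the surprisal of a word is the negative log of its probability, and greedy decoding picks the word that minimizes it. A toy illustration in Python (the words and probabilities below are invented, not taken from any real model):

    import math

    # A made-up next-word distribution for a prompt like "The sun is ..."
    next_word_probs = {
        "bright": 0.55,   # the safe, expected continuation
        "warm":   0.30,
        "blue":   0.10,
        "hungry": 0.05,   # the surprising one a metaphor might want
    }

    def surprisal(p):
        """Information-theoretic surprise of an outcome, in bits."""
        return -math.log2(p)

    for word, p in next_word_probs.items():
        print(f"{word:>7}: p={p:.2f}, surprisal={surprisal(p):.2f} bits")

    # Greedy decoding always picks the maximum-probability word,
    # i.e., the word with minimum surprisal -- "bright" here.
    print("greedy choice:", max(next_word_probs, key=next_word_probs.get))

A model decoded for likelihood keeps choosing “bright”; the word that might make a metaphor land is exactly the one such decoding is built to avoid.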

I don’t know that this particular deficiency is inherent to machines; I tend to think it is just an artifact of how we’re training them.

But there is something else that bothers me. As with jokes, machine-generated metaphors can seem completely hollow and awful 9/10 times and yet on the 10th time they touch on something that makes me pause and think. I notice a bit of truth. I walk away with a sense that meaning was created.

Where did the meaning come from? At what point was a metaphor made well? Or made at all? Is the machine just another eye or ear -- another way for my brain to get access to correlations? I’m biased towards thinking that only we make meaning and metaphor. Even when the machines happen to make jokes that make us laugh or metaphors that make us think, it still feels like we humans are the source of meaning.

I say this is my bias because it might be inherent to the fact that I’m thinking of these things as “machines”. As soon as I do that, I am dehumanizing them and perhaps removing their capacity for creating meaning within any interpretation of events generated by my own conceptual framework. By calling them machines, I’ve excluded them from my linguistic community and precluded ever thinking of them as creating metaphors at all. We do this all the time to other living things, but this may very well be a form of parochial solipsism.

And now we’re back to the question of how a machine could convince you, on that deep, conceptual level that it is not a machine but deserves your recognition as a co-creator of meaning within your conceptual and linguistic community. I don’t know the answer to that, but it might directly determine the answer to your question.

A multithreaded metaphor for the polyvalent Self

As an operating system

We are always running multiple threads. There are many versions of you inside your mind. Their concerns, their memories, their interests are in some ways shared, but in others partitioned and compartmentalized.

When you undertake a task or aim to learn or grow, try to have your undertaking and actions, generally speaking, satisfy multiple threads. Everything you are learning should teach you things on multiple levels and in many ways -- if it does not, look at it from the perspective of another thread, another self within your Self.

Align as many threads as you can into as few as possible and you will feel increasingly in sync with yourself. Nonetheless, acknowledge that unaligned threads vary in their requirements at different times. Some will occasionally require more processing power, more attention. At times, some will feel starved for resources. Try to minimize this by feeling out the natural rhythms and regularities by which these threads manifest their needs. Either find things in life that naturally align their interests and feed competing threads, or balance the resources they need across the different aspects of your life.

As a collective of selves rather than a single Self

There are many different selves within the Self. One wants to read literature, the other poetry, yet another to play guitar, another to eat, another to sleep. Some doubt the others. Others encourage and support yet others.

Because resources are scarce, many of these selves compete. When the competition is too fierce, or the managing self -- the Ego that sees the other selves as others -- is too neglectful or distributes resources without prudence, the notion of a single Self begins to dissolve. The fractured Self is then experienced as multiple selves. Often the Shadow-self represents the neglected aspects of the self, the hurt selves, in direct communication with the Ego-self. Because the Shadow-self is in this way the counterpoint to the Ego-self, the latter can also be associated with the waking self, or the self that operates in the light.

As the self in the light, it must manage receiving light, or stimulus, from the external world, and it must also manage the projections of the Self and its many aspects out onto the world. As the Ego-self loses the trust and loyalty of the other aspects of the Self, these outward projections become harder to disentangle from the inwardly received projections. This is in many ways why an immature Ego-self -- one that poorly manages the many aspects of the Self, or one that is unfamiliar with them and thus neglects them -- suffers an existence in which the inward selves rage against it, or the outer world of others projects a dictated existence and sense of reality upon it. Direction in these times is volatile and highly variable, depending on the varying projections onto the Ego-self. This disorientation is best avoided by attending to the many selves.

Somewhere in between the extremes of a unified Self and a fractured Self (a factional plurality of selves), during a recovery post-fracture or when resources first start being distributed poorly, there is the experience of the collective of selves -- a fellowship of characters with some rough edges and rough relations to others that need to be smoothed out by the managing Ego-self.

The managing Ego-self can resolve conflicts through attention to the needs of the others, negotiation, conversation, and learning how to align the skills and interests of the other selves for the sake of the whole. As a unified mono-Self begins to appear, the fellowship, council, or pantheon dissolves into it. The dissolution of the council is often accompanied by the experience of the collective becoming less real as such. Its members begin to feel like they have less well-defined boundaries, some merge with others, and they generally feel more like fictional, psychological artifacts than entities that deserve the title of real existence. Even the shadow begins to take on the appearance of its counterpoint within the mind's eye, after previously taking on the appearance of infinite darkness, a haunting monster, or a dark knight.

The managing, waking Ego-self often experiences the final re-unification of the many into one as an encounter with a divine, all-powerful being. As the reality of the many selves is rescinded, those rights are agglomerated into the reality of the one Self, which is the Father-Self to the Ego-self, itself transformed into the Son-self by seeing the Eyes of God manifesting divine fire. Such overpowering light the Ego-self comes to know by nurturing that which glows during the darkest times.