Algebra 2 - 2015/16 - Durham
Contact
Office hours
- Wednesday 11:30-12:30
- Thursday 13:30-14:30
Overview
The 'big idea' is that the objects which occur in many mathematical problems have extra (possibly well hidden) structure. By finding and utilising this extra structure we can use new tools and techniques to find 'better' solutions. The mutilated chessboard problem is a great example of this idea.
The Algebra 2 course is designed to give an introduction to more general abstract algebraic objects such as rings, fields and groups. Last year, in Linear Algebra you had a brief encounter with some group theory when studying modular arithmetic, matrix groups, and permutations. So in this course we can deepen your knowledge of group theory and investigate the more subtle properties of groups.
The prototypical example of a ring is the integers \( \Z \). Much of ring theory can then be seen as an attempt to investigate the essential properties of \( \Z \), as a way to generalise arguments (about divisibility, primality, irreducibility, GCDs, ...) to other settings. Notably this leads to the field of algebraic number theory, which (predominantly) investigates solutions to Diophantine equations.
Group theory, on the other hand, studies the symmetries of things. These might be explicit symmetries of some geometric object. They can also be more abstract symmetries like \( \sqrt{2} \mapsto -\sqrt{2} \), which interchanges the roots \( 1 \pm \sqrt{2} \) of \( x^2 - 2x - 1 \). Studying the symmetries of solutions to polynomial equations gives rise to the field of Galois theory, and deep results about (un)solvability of degree 5 polynomials in radicals. With some knowledge of group theory, you can even discover your own solution to the Rubik's cube by working (implicitly) with the Rubik's cube group!
Slides from tutorials
- Slides: Revision Tutorial 2 - Group Theory revision
- Slides: Revision Tutorial 1 - Ring Theory revision
- Slides: Tutorial 2 - Polynomial irreducibility
- Slides: Tutorial 3 - Quotient rings and ideals
- Slides: Tutorial 4 - Prime and maximal ideals
- Slides: Tutorial 5 - Checking for groups
- Slides: Tutorial 6 - Symmetric and Cyclic groups
- Slides: Tutorial 7 - Distinguishing and identifying groups
- Slides: Tutorial 8 - Group actions and Conjugation
- Slides: Tutorial 9 - Finitely generated abelian groups
Handouts from tutorials
- Handout: Tutorial 2 - Eisenstein and the discriminant
- Handout: Tutorial 2 - Polynomials reducible modulo every prime \( p \)
- Example: Tutorial 3 - For polynomials over rings that are not fields, repeatedly eliminating high powers of \( x \) in quotients might not work.
- Handout: Tutorial 4 - Finding inverses in \( F[x] / I \) using the Euclidean algorithm
- Handout: Tutorial 5 - Redundancy in the group axioms
- Handout: Tutorial 7 - Are groups \( \R \) and \( \R \times \R \) isomorphic?
Tutorial summaries
- Tutorial 9 - Week 19 - 15 Mar 2016 - Finitely generated abelian groups
- Tutorial 8 - Week 17 - 1 Mar 2016 - Group actions and Conjugation
- Tutorial 7 - Week 15 - 16 Feb 2016 - Distinguishing and identifying groups
- Tutorial 6 - Week 13 - 2 Feb 2016 - Symmetric and Cyclic groups
- Tutorial 5 - Week 11 - 19 Jan 2016 - Checking for groups
- Tutorial 4 - Week 9 - 08 Dec 2015 - Prime and maximal ideals
- Tutorial 3 - Week 7 - 24 Nov 2015 - Quotient rings and ideals
- Tutorial 2 - Week 5 - 10 Nov 2015 - Polynomial irreducibility
- Tutorial 1 - Week 3 - 27 Oct 2015 - Introduction to rings
Revision Tutorial 2 - Week 21 - 3 May 2016 - Group theory revision
- Slides: Revision slides 2
We covered Question 9 from the 2012 Exam. The slides have some more material about question 10iii,iv) from 2014, and question 4iii) and question 5i,ii) from 2012, which might be helpful.
Revision Tutorial 1 - Week 20 - 26 Apr 2016 - Ring theory revision
- Slides: Revision slides 1
We covered Question 7 from the 2012 Exam, and Question 8 from the 2014 Exam.
Tutorial 9 - Week 19 - 15 Mar 2016 - Finitely generated abelian groups
- Slides: Warmup/recap
Here are the slides from the recap and warmup questions. Firstly I reminded you of the statement of the Fundamental Theorem for Finitely Generated Abelian groups, which states (roughly) that every such group is a product of cyclic groups. Then we talked about the method by which we take a group defined by generators and relations, and attempt to write it in this special form.
I tried to motivate why the integer row and column operations are the right thing to use - they correspond to manipulating relations, and substituting to change generators. Notice \( 2x = 0 \) and \( x = 0 \) are not equivalent in a group (e.g. \( \Z/2 \)), so we can only multiply a relation by \( \pm 1 \). Then repeatedly subtracting one relation from another gives only integer multiples in the row operations. Also since \( \langle 2x \rangle \neq \langle x \rangle \), we can only substitute a generator with \( x = \pm x' \), and add/subtract integer multiples of other generators.
Extra: The following notes on Finitely Generated Abelian Groups, part of Christopher Cooper's Group Theory course, give a good explanation of Finitely Generated Abelian Groups. (Be aware the notation is different from our course!) Section 10.3 talks about column operations in terms of substituting new generators. In Section 10.4, Theorem 2 proves that every matrix can be made diagonal using elementary row and column operations, by giving a recursive procedure. You can see an instance of this in Example 9 directly following Theorem 2. Section 10.8 gives a nice example of where Finitely Generated Abelian Groups appear elsewhere in mathematics: the Alexander Group of a Knot. If you're doing Geometric Topology, this might give some motivation. You can read more about the Alexander Group in Cooper's Topology notes.
Extra: Another reason why Finitely Generated Abelian Groups are worth investigating is that they are one of the few general classes of groups that we fully understand. Non-abelian groups are difficult: there are 49,487,365,422 groups of order \( 2^{10} = 1024 \), but only 42 of these are abelian. So trying to classify non-abelian groups is a mammoth task; there is no good explicit classification in this case. But we have FTFGAG in the abelian case.
- Q1: Here we have to find the torsion coefficients of a group \( G \), so put the group in the form \( \Z/d_1 \times \cdots \times \Z/d_k \), where \( d_i \mid d_{i+1} \).
- Everyone had the right idea initially. Use \( \Z/nm \cong \Z/n \times \Z/m \) when \( \gcd(n,m) = 1 \), to split everything up into prime-power factors \( \Z/p^n \).
- How should we recombine the factors? What's the strategy?
- Think about the case where only one factor \( \Z/3 \) appears (and no higher powers).
- Suppose I have the result \( G \cong \Z/d_1 \times \Z/d_2 \), with \( d_1 \mid d_2 \). Can I have that \( 3 \mid d_1 \)?
- No, because then I would conclude \( 3 \mid d_2 \) also, and so at least two factors of \( \Z/3 \) now appear!
- So the idea is that the highest power of each prime appears in the last factor.
- The general strategy: write increasing prime powers \( p^n \) in row \( p \). Right-align the result. This setup guarantees each column divides the next. So take the product of the entries in each separate column to get the torsion coefficients.
- For \( G = \Z/10 \times \Z/36 \times \Z/14 \times \Z/21 \) we have \[ \begin{array}{c|ccc} p & & p^n & \\ \hline 2 & 2 & 2 & 2^2 \\ 3 & & 3 & 3^2 \\ 5 & & & 5 \\ 7 & & 7 & 7 \end{array} \]
- The first column gives \( \Z/2 \), the second gives \( \Z/42 \), and the third gives \( \Z/1260 \). So \( G \cong \Z/2 \times \Z/42 \times \Z/1260 \). (This procedure is sketched in code at the end of these notes.)
- Q4: This question asks about a finite abelian group whose order is not divisible by a square. We have to show it is necessarily a cyclic group.
- Think about the unique form in FTFGAG, where \( d_1 > 1 \) and \( d_i \mid d_{i+1} \). Since the group is finite, the free rank \( r \) is \( 0 \) (there are no \( \Z \) factors).
- What does a cyclic \( \Z/n \) group look like in this form? It is just \( \Z/d_1 \), where \( d_1 = n \).
- So we could try to show that if no square divides \( \# G \), then there is only one torsion coefficient.
- What happens if we have two torsion coefficients, \( d_1 \) and \( d_2 \)?
- The group \( G = \Z/d_1 \times \Z/d_2 \) has size \( d_1 \times d_2 \).
- If \( d_1 \mid d_2 \), then the square \( d_1^2 \mid d_1 \times d_2 = \# G \). But we assumed this does not happen.
- There cannot be \( \geq 2 \) torsion coefficients. Hence we have one torsion coefficient \( d_1 \). Then what is \( G \)?
- Well \( G \cong \Z/d_1 \), which is a cyclic group!
- Q6/7: We covered Q6 in the warmup. Q7 is very much the same, but just with a bigger matrix because there are more relations.
- There is an algorithm you can follow to reduce any matrix to diagonal form with integer row and column operations.
- In the examples that arise in this course, 'trial-and-error' by hand is enough. But try to make use of rows and columns which only have a few non-zero entries first.
- You can reduce to FTFGAG form where \( d_i \mid d_{i+1} \), using row and column operations. You could also do this as a clean-up step after writing down the group using any diagonal form. (Compare with Q1.)
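Extra: The 'right-aligned table' method from Q1 is easy to mechanise if you want to check an answer. Below is a short Python sketch (not part of the course material; the function name is made up) which takes the orders of the cyclic factors and returns the torsion coefficients \( d_1 \mid d_2 \mid \cdots \).

```python
from collections import defaultdict

def torsion_coefficients(orders):
    """Given G = Z/n_1 x ... x Z/n_k (each n_i > 1), return d_1, ..., d_m with
    d_i | d_{i+1} and G isomorphic to Z/d_1 x ... x Z/d_m."""
    rows = defaultdict(list)              # prime p -> list of prime powers p^n appearing
    for n in orders:
        m, p = n, 2
        while m > 1:                      # trial division into prime powers
            if m % p == 0:
                q = 1
                while m % p == 0:
                    m //= p
                    q *= p
                rows[p].append(q)
            p += 1
    width = max(len(powers) for powers in rows.values())
    ds = [1] * width
    for powers in rows.values():
        powers.sort()                     # increasing prime powers along the row
        for j, q in enumerate(powers):
            ds[width - len(powers) + j] *= q   # right-align, multiply down columns
    return ds

print(torsion_coefficients([10, 36, 14, 21]))  # [2, 42, 1260], as in Q1 above
```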
Tutorial 8 - Week 17 - 1 Mar 2016 - Group actions and Conjugation
- Slides: Warmup/recap
Here are the slides from the recap and warmup questions. We recalled the definition of a group action, and some important concepts associated to it, namely orbits and stabilisers. We also recalled (a corollary of) the Orbit-Stabiliser Theorem, which relates the size of the orbit, the size of the stabiliser and the size of the group.
If you're still not convinced group theory might be useful, try reading the following short discussion 'Why is group theory important?'.
- Q2: We covered part i) in the warmup. I suggested leaving part ii) until after some other questions.
- Since conjugate elements have the same order, a good first step for finding conjugacy classes can be to gather the elements of the same order.
- This might not always be the best thing to do. But it can help reduce the number of different pairs you have to check are conjugate.
- Remembering the Orbit-Stabiliser Theorem can make determining the stabilisers easier.
- We found some 'easy' elements in \( G_g \). Under conjugation we always have \( g^k \in G_g \), since \( g^k g g^{-k} = g^{k+1-k} = g \).
- In each case these elements were enough to make the stabiliser the right size from the Orbit-Stabiliser Theorem, so they must be exactly the stabiliser. (A brute-force check of this kind of computation is sketched at the end of these notes.)
- Q3: After remembering the definition of the centre \( Z(G) \), people were generally okay with this question. The main issue was that people thought their solution was `too easy' to be a correct proof.
- Being able to analyse your own solutions, and check it is (very probably...) correct is an important skill to develop.
- It can also help to write down as much explanation as possible when solving a question. Imagine you were trying to explain your solution to a friend. What would you say at each step to convince your friend the solution is correct? Whatever you say should probably be written down too.
- If in doubt, carefully write out the definition of the object you are working with. Here \( \ccl_G(z) = \{ gzg^{-1} \mid g \in G \} \). But we always have \( z \in \ccl_G(z) \), because \( z = eze^{-1} \).
- So if \( \ccl_G(z) \) has size 1, it must just be \( \ccl_G(z) = \{ z \} \).
- Now pick some arbitrary \( g \in G \), then we get \( gzg^{-1} \in \ccl_G(z) = \{ z \} \), so that \( gzg^{-1} = z \). Rearranging shows that \( gz = zg \). Since \( g \) was arbitrary \( z \) must commute with every element of \( G \), so \( z \in Z(G) \).
- And if \( z \in Z(G) \), we have that \( zg = gz \) for any \( g \in G \). So if we conjugate \( z \) by \( g \), we get \( gzg^{-1} = zgg^{-1} = z \).
- This shows that \( \ccl_G(z) = \{ gzg^{-1} \mid g \in G \} = \{ z \} \). So the conjugacy class has size 1.
- Q6: Lots of people found the phrasing of this question awkward, and initially didn't understand what was being asked. Certainly the use of "points in some orbit", rather than "elements of some fixed orbit", caused problems.
- So we should start by fixing some orbit \( O = G(x) \). We need to show that \( G_x \) is normal if and only if \( G_y = G_x \) for every \( y \in O \).
- If \( O = G(x) \), how can any \( y \in O \) be written? Since \( O = G(x) = \{ g(x) \mid g \in G \} \), we have \( y \in O \) if and only if \( y = g(x) \), for some \( g \in G \).
- Recall the result from Proposition 8.16, which says \( G_{g(x)} = g G_x g^{-1} \). So the stabiliser of \( g(x) \) can be related to the stabiliser of \( x \) in a very specific way.
- Now recalling the definition of a normal subgroup should be enough to solve the question.
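Extra: For a small group like \( S_3 \), you can check conjugacy class and stabiliser computations (as in Q2) by brute force. Below is a short Python sketch (not part of the course material); it represents permutations as tuples, computes each conjugacy class \( \ccl_G(x) \) and centraliser, and checks the Orbit-Stabiliser relation \( \#\ccl_G(x) \cdot \# G_x = \# G \).

```python
from itertools import permutations

def compose(p, q):
    # permutations as tuples with p[i] = image of i; (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the group S_3

for x in G:
    ccl = {compose(compose(g, x), inverse(g)) for g in G}               # conjugacy class of x
    stab = [g for g in G if compose(compose(g, x), inverse(g)) == x]    # stabiliser = centraliser
    assert len(ccl) * len(stab) == len(G)                               # Orbit-Stabiliser
    print(x, "class size", len(ccl), "centraliser size", len(stab))
```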
Tutorial 7 - Week 15 - 16 Feb 2016 - Distinguishing and identifying groups
- Slides: Warmup/recap
- Handout: Are groups \( \R \) and \( \R \times \R \) isomorphic?
Here are the slides from the recap and warmup questions. We covered a list of properties which two isomorphic groups have to share. This gives us a useful tool for distinguishing different groups. But I warned you that matching properties is NOT enough to conclude that the groups are isomorphic. You need a proof.
At the end I gave out a handout which deals with showing the groups \( \Q \) and \( \Q \times \Q \) are not isomorphic. And showing that \( \R \) and \( \R \times \R \) are isomorphic (if you accept the Axiom of Choice).
- Q2: A somewhat common issue appeared when calculating the sizes of \( \Z/p \times \Z/p \) and \( \Z/p^2 \), and deciding the group operation.
- As a set \( \Z/n = \{ \overline{0}, \overline{1}, \ldots, \overline{n-1} \} \), so it always has \( n \) elements. This means \( \Z/p \times \Z/p \) has \( p^2 \) elements, as does \( \Z/p^2 \).
- Since mathematicians are lazy, they often don't specify the group operation if it is a `standard' and well-known one. You should know/be able to infer it.
- You know two operations \( + \) and \( \times \) on \( \Z/n \). (Together these make \( \Z/n \) into a ring.) Since \( \overline{0}^{-1} \) does not make sense, \( \times \) cannot possibly make \( \Z/n \) into a group. So the operation which makes this into a group is \( + \).
- After sorting this out, people correctly settled on the idea that \( \Z/p \times \Z/p \) is non-cyclic, while \( \Z/p^2 \) is cyclic. The same method as in the warmup can show \( \Z/p \times \Z/p \) is non-cyclic. (A numerical check is sketched at the end of these notes.)
- Another way to see this is to realise that \( p(a,b) = (pa, pb) = (\overline{0},\overline{0}) \), so every element of \( \Z/p \times \Z/p \) has order \( \leq p \). But \( \overline{1} \) in \( \Z/p^2 \) has order \( p^2 \).
- Q4: This was covered in the warmup. I showed \( \Z \) is cyclic, but \( \Z \times \Z \) is non-cyclic to distinguish them. One student suggested another nice solution, as follows.
- If \( H \) is any non-trivial (normal) subgroup of \( \Z \), then \( \Z/H \) is finite. This is because such a subgroup \( H \) must be cyclic, and so is generated by \( d \in \Z \). Then \( \Z/H \cong \Z/d \) has size \( d \).
- On the other hand \( \Z \times \{ 0 \} \) is a non-trivial (normal) subgroup of \( \Z \times \Z \), and \( (\Z\times\Z) / (\Z \times \{ 0 \}) \cong \Z \). For this, use the First Isomorphism Theorem for Groups. (Essentially, quotienting by \( \Z \times \{ 0 \} \) kills the first factor of \( \Z \) in \( \Z \times \Z \).)
- Since the quotient \( (\Z\times\Z)/(\Z\times\{0\}) \) is infinite, but every quotient of \( \Z \) by a non-trivial subgroup is finite, we cannot have that \( \Z\times\Z \cong \Z \).
- Q5: The idea that \( G \times H \) is cyclic implies \( G \) and \( H \) are both cyclic was 'obvious' to lots of people, and they were able to give verbal arguments for this. The difficulty came in knowing what to write for the proof.
- In this situation, go back to the definitions and use them as carefully as possible. Similarly use the hypotheses of the question as carefully as possible. It is often a case of just writing them out mathematically.
- What does it mean for \( G \) to be cyclic? Well, it means \( G \) is generated by one element.
- But formally this means there is some \( g_0 \in G \) such that for every \( g \in G \), we have \( g = g_0^k \) for some \( k \in \Z \).
- We're told \( G \times H \) is cyclic, so let \( (g_0, h_0) \in G \times H \) be a generator for \( G \times H \). I can pick any \( g \in G \), and any \( h \in H \) and get that \( (g,h) \in G \times H \). (I have to use \( G \times H \) somewhere...)
- Since \( G \times H \) is cyclic, how can I write \( (g,h) \)? Well, \( (g,h) = (g_0,h_0)^k \) for some \( k \in \Z \). But then \( (g_0, h_0)^k = (g_0^k, h_0^k) \) - recall how the group operation on a direct product works.
- But this gives me \( (g,h) = (g_0^k, h_0^k) \), so we have \( g = g_0^k \) and \( h = h_0^k \) for some \( k \in \Z \).
- Since \( g \in G \) was arbitrary, this shows \( G \) satisfies the definition of being generated by \( g_0 \), which means \( G \) is cyclic. And the same for \( H \).
- Q6: I think only a few people tried this question.
- The first part, showing \( A \times B \) is a subgroup of \( G \times H \), when \( A \) is a subgroup of \( G \) and \( B \) is a subgroup of \( H \), should be a fairly mechanical check of the subgroup axioms.
- Make sure you are searching for the right thing in the second part. If \( A \times B \) is a subgroup of \( G \times H \), then necessarily \( A \) is a subgroup of \( G \) and \( B \) is a subgroup of \( H \). (Why?)
- So you want to search for a subgroup of \( \Z \times \Z \) which is not even a product. Think back to my explanation/proof that \( \Z \times \Z \) is not cyclic.
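Extra: The element-order argument from Q2 is easy to check numerically. Here is a short Python sketch (not part of the course material, using \( p = 5 \)) comparing the largest element order in \( \Z/p^2 \) with the largest element order in \( \Z/p \times \Z/p \).

```python
from math import gcd

def add_order(a, n):
    # additive order of a in Z/n: smallest k >= 1 with k*a divisible by n
    return n // gcd(a, n)

def lcm(x, y):
    return x * y // gcd(x, y)

p = 5
print(max(add_order(a, p * p) for a in range(p * p)))
# 25 = p^2: some element generates everything, so Z/p^2 is cyclic
print(max(lcm(add_order(a, p), add_order(b, p))
          for a in range(p) for b in range(p)))
# 5 = p: no element of Z/p x Z/p can generate the whole group of size p^2
```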
Tutorial 6 - Week 13 - 2 Feb 2016 - Symmetric and Cyclic groups
- Slides: Warmup/recap
Here are the slides from the recap and warmup questions. I tried to remind you about permutations, including how to multiply them, how to write them as disjoint cycles, and how to write them as transpositions. I also reminded you about orders of elements, and that the order of an element divides the order of the group.
- Q2: Remember permutations are functions, so they multiply from right to left.
- To find out how a product \( \rho \circ \sigma \) of permutations acts, only the final output is relevant: the intermediate value \( \sigma(1) = k \), say, is not part of the result.
- The decomposition into disjoint cycles is essentially unique. The cycles can be swapped, since disjoint cycles commute. And every cycle can be shifted around since \( (1 \; 2 \; 3 \; 4) = (2 \; 3 \; 4 \; 1) = \cdots \). But this is all.
- You can leave cycles of length one out of the final answer, so \( (1 \; 2 \; 3)(4) = (1 \; 2 \; 3) \).
- The decomposition into transpositions is far from unique. For example \( (1 \; 2) (2 \; 3) (1 \; 2) = (1 \; 3) \). What is unique is whether the number of transpositions is odd or even.
- Q3: There were some problems with how to initially interpret this question, so take care.
- For any \( g \), you must show \( gHg^{-1} \) is a subgroup, but \( g \) is fixed in each separate case.
- Then make sure you understand what the notation \( gHg^{-1} \) means. It is the set \[ gHg^{-1} = \{ ghg^{-1} \mid h \in H \} \]
- So you can pick two elements \( x = gh_1g^{-1} \) and \( y = gh_2g^{-1} \in gHg^{-1} \). Then \( h_1, h_2 \in H \) may be different, but it is the same \( g \) both times.
- Once you have the right set-up, checking the subgroup criteria should be straightforward.
- Q4: Lots of people came up with their own (correct!) way of counting the cycles. This is good; it shows understanding.
- Some approaches only worked well for counting \( n \)-cycles in \( S_n \), so take care.
- There is a `standard' way to count \( k \)-cycles in \( S_n \):
- A \( k \)-cycle looks like \( (a_1 \; a_2 \; \cdots \; a_k) \). How many choices for \( a_1 \)? There are \( n \). Then how many remaining choices for \( a_2 \)? Well, \( n - 1 \) since \( a_2 \neq a_1 \). And so on.
- But how many times have we counted the same \( k \)-cycle? Since \[ (a_1 \; a_2 \; \cdots \; a_k) = (a_2 \; a_3 \; \cdots \; a_k \; a_1) = \cdots \] the same \( k \)-cycle appears \( k \) times. So divide the result by \( k \).
- Q5: I don't know if anyone managed to look at this question. First, you must figure out what you actually need to show. It is always good to look back at the definitions.
- What is the subgroup \( \langle g \rangle \)? By definition \( \langle g \rangle = \{ g^i \mid i \in \Z \} \).
- If \( g \) has order \( n \), then what is \( g^{n+1} \) equal to? And \( g^{n+2} \)? Do you ever get anything new?
- Use the division algorithm to make this precise: write \( m = qn + r \), for \( 0 \leq r \lt n \). Then \( g^m = g^r \) (why?).
- So we can reduce to \( \langle g \rangle = \{ g^i \mid 0 \leq i \lt n \} \). Are these elements all different? If \( g^r = g^s \), with \( 0 \leq r, s \lt n \), find a contradiction with the assumption that order of \( g \) is \( n \).
- Q7: Make sure you've got the elements of \( (\Z/n)^\times \) correct. Otherwise you make lots of extra (and incorrect) work for yourself.
- Recall Proposition 4.5 from last term(!), which says \( (\Z/n)^\times = \{ \overline{d} \mid \gcd(d, n) = 1 \} \). This means \( (\Z/30)^\times \) only has eight elements. Not 29...
- To save work, you can use Lagrange to limit the possible orders to divisors of \( \# G \).
- To find an element of maximal order seems generally difficult. Since \( \Z/19 \) is a field, some general theory says \( (\Z/19)^\times \) is a cyclic group. So the maximal order is 18.
- But finding that \( \overline{2} \) is a generator means finding a primitive root mod \( p \), and there is no (known) good way to do this.
- There are plenty of other order 18 elements in \( (\Z/19)^\times \). For example \( \overline{2}^5 = \overline{13} \) works, since \( \gcd(5,18) = 1 \). Realise \( \overline{13}^r = \overline{1} \) means \( \overline{2}^{5r} = \overline{1} \). And the only time \( 18 \mid 5r \) is when \( 18 \) already divides \( r \).
- When computing things like \( \overline{2}^9 \pmod{19} \), it is easier to break it into small steps, and reduce the answer mod 19 before carrying on. You can also use negatives.
- For example, \( \overline{2}^9 = \overline{2} \cdot (\overline{2}^4)^2 \). But \( \overline{2}^4 = \overline{16} = \overline{-3} \pmod{19} \). So \[ \overline{2}^9 = \overline{2} \cdot (\overline{-3})^2 = \overline{18} \pmod{19}. \]
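Extra: If you want to double-check order computations in \( (\Z/n)^\times \) (as in Q7), the repeated multiplication is easy to hand to a computer. A short Python sketch (not part of the course material; the function names are made up):

```python
from math import gcd

def units(n):
    # the elements of (Z/n)^x: classes of d with gcd(d, n) = 1
    return [a for a in range(1, n) if gcd(a, n) == 1]

def mult_order(a, n):
    # multiplicative order of a modulo n (assumes gcd(a, n) = 1)
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(len(units(30)))                              # 8 elements, not 29
print(mult_order(2, 19), mult_order(13, 19))       # 18 and 18, as claimed above
print(max(mult_order(a, 19) for a in units(19)))   # 18, so (Z/19)^x is cyclic
```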
Tutorial 5 - Week 11 - 19 Jan 2016 - Checking for groups
- Slides: Warmup/recap
- Handout: Redundancy in the group axioms
Here are the slides from the recap and warmup questions. This tutorial was mostly meant to be a recap of the group theory from first year Linear Algebra. Before jumping into the definition of a group, I tried to motivate it as a way to mathematically describe symmetries (using a cube as an example).
We spent most of the time checking (tediously) whether or not various things were groups. For the most part, there were no real problems.
I also gave out a handout which shows how the group axioms contain some redundancies. With three short questions, you will show how to derive the group axioms from a `right-sided' only version, or from a `left-sided' only version.
- Q1 (13): Generally, associativity is difficult to check. Unless you have a good reason why the operation is associative (for example, when checking a subgroup of a bigger group), all you can do is check every triple for associativity.
- If any of \( x \), \( y \) or \( z \) is the identity, associativity holds because \( x \circ (y \circ z) \) and \( (x \circ y) \circ z \) reduce to the product of the other two elements.
- So here we only need to check one case: does \( (b \circ b) \circ b = b \circ (b \circ b) \)?
Checking associativity: Associativity cannot be read off visually from the multiplication table. Even if every row/column has no repetitions, as is the case for groups, associativity is not guaranteed. For example \[ \begin{array}{c|ccccc} \circ & e & w & x & y & z \\ \hline e & e & w & x & y & z \\ w & w & e & y & z & x \\ x & x & z & e & w & y \\ y & y & x & z & e & w \\ z & z & y & w & x & e \end{array} \] Here \( e \) is the identity element, and every element is self-inverse, \( g^{-1} = g \). But the operation is not associative, because \( w \circ (w \circ x) = w \circ y = z \) and \( (w \circ w) \circ x = e \circ x = x \). In this example, associativity fails in a total of 32 cases!
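If you want to count the failing triples yourself, the brute-force check over all \( 5^3 = 125 \) triples is easy to automate. Here is a short Python sketch of that check (not part of the course material), encoding the table above as strings:

```python
elts = "ewxyz"
table = {          # row a, column b: the entry is a o b, copied from the table above
    "e": "ewxyz",
    "w": "weyzx",
    "x": "xzewy",
    "y": "yxzew",
    "z": "zywxe",
}

def op(a, b):
    return table[a][elts.index(b)]

failures = [(a, b, c) for a in elts for b in elts for c in elts
            if op(a, op(b, c)) != op(op(a, b), c)]
print(len(failures))                    # the number of triples where associativity fails
print(("w", "w", "x") in failures)      # True: the example worked out above
```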
To give you an idea of how difficult associativity is to check for \( \Z \), here is a proof that addition of natural numbers is associative. The proof uses the Peano axioms to define \( \mathbb{N} \), and then proves addition is associative using induction. And then you must still define \( \Z \) in terms of \( \mathbb{N} \) to use this result.
- Q1 (15): This is a slightly more interesting example, since it is an unfamiliar operation on a familiar set.
- Closure is straightforward to check, because the operation just involves addition of integers.
- To find the identity you can solve the equation \( a = e \circ a = e + a + 1 \), to find \( e = -1 \). Then check this works the other way: is \( a = a \circ e \)?
- To find the inverse \( a' \) of \( a \), solve the equation \( -1 = a \circ a' = a + a' + 1 \), to get \( a' = -2 - a \). Then check this works the other way: is \( a' \circ a = -1 \)?
- For associativity, we can't directly appeal to associativity in a bigger group, since the operation is unfamiliar.
- Instead just compute directly \( a \circ (b \circ c) \) and \( (a \circ b) \circ c \) using the definition, and check if they are equal using the associativity of \( \Z \).
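Extra: Before writing up the algebraic argument for Q1 (15), you can sanity-check the candidate identity and inverses numerically. The sketch below (not part of the course material) only spot-checks random integers, so it is not a proof - the real verification is the calculation \( a \circ (b \circ c) = a + b + c + 2 = (a \circ b) \circ c \) - but it catches slips quickly.

```python
import random

def op(a, b):
    return a + b + 1

e = -1                  # candidate identity

def inv(a):
    return -2 - a       # candidate inverse of a

for _ in range(1000):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    assert op(e, a) == a == op(a, e)                  # identity works on both sides
    assert op(a, inv(a)) == e == op(inv(a), a)        # inverses work on both sides
    assert op(a, op(b, c)) == op(op(a, b), c)         # associativity
print("all spot checks passed")
```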
Tutorial 4 - Week 9 - 08 Dec 2015 - Prime and maximal ideals
Here are the slides from the recap and warmup questions. I started out with a recap of the definition of prime and maximal ideals. The idea of a prime ideal is a kind of generalisation of a prime number, with \( a \in I \) corresponding to \( p \mid a \). A maximal ideal can be thought of as `larger' than all comparable ideals if we think of \( \subset \) as `smaller'.
Then I recalled an important result (Proposition 15.4) which connects these types of ideals to specific behaviour in the quotient ring. An ideal \( I \subset R \) is prime if and only if \( R/I \) is an integral domain. An ideal \( I \subset R \) is maximal if and only if \( R/I \) is a field. Depending on the situation, we may know the type of the ideal, and use this to say something about the quotient. In other cases, we may know what the quotient is and use this to say something about the ideal.
With one group I drew some pictures to help illustrate how quotient rings work. I discussed tutorial questions with some groups, and one of the homework questions with another group.
We didn't really get around to questions 3 or 4, which is a little unfortunate. Please come and ask me about these questions during the office hours!
- Q1i) This question was part of the warmup. With this question, we had to show that a particular quotient ring \( (\Z/5)[x] / I \), for \( I = (x^2 + x + \overline{2})_{(\Z/5)[x]} \), is a field. Rather than just appeal to a big theorem from lectures, I wanted to recall some of the steps that go into it.
- This is a quotient ring, so has the form \( R/I \). If we show the ideal \( I \) is maximal, then \( R/I \) is a field.
- Since \( (\Z/5)[x] \) is a PID, we can use Q3 (like was done in the lectures) to show \( I \) is maximal.
- We therefore just need to check that \( x^2 + x + \overline{2} \) is irreducible. (See tutorial 2, Q2)
- Alternative: One could prove directly \( F[x] / (f)_{F[x]} \) is a field, when \( f \) is irreducible, using the Euclidean algorithm.
- Use the division algorithm to find representatives with degree \( \lt \deg(f) \). If the polynomial \( f \) is irreducible, then the \( \gcd \) of this representative and \( f \) is 1. The Euclidean algorithm will then construct an inverse.
- These ideas essentially do appear in the proof of Theorem 17.1, because knowing the Euclidean algorithm works tells us \( F[x] \) is a PID. And the ideal \( (f) \) is maximal because including anything more gives us an element whose \( \gcd \) with \( f \) is 1, and then (by the Euclidean algorithm) \( 1 \) itself must be in the larger ideal.
- Q1ii) Knowing this quotient ring is a field, we were asked to find the inverse of \( (\overline{2}x + \overline{3}) + I \).
- By eliminating \( x^2 \) and higher powers using \( x^2 + I = (-x - \overline{2}) + I \), we can suppose the inverse is \( (ax + b) + I \).
- From here we can set up and solve a system of equations to get the inverse. This is done in the solutions on DUO. (A brute-force version of this search is sketched at the end of these notes.)
- I wanted to show a different approach, using the Euclidean algorithm, which is more systematic.
- I gave out a handout with a more complicated example which takes several steps.
- Q2) We wanted to show that the ideal \( (2,x)_{\Z[x]} \) is not principal.
- Recall that an ideal is principal if it can be written with one generator.
- This ideal has two generators, but maybe we can write it with only one somehow? Like in \( \Z \), where \( (4, 6)_{\Z} = (2)_{\Z} \); in \( \Z \) any ideal with \( \geq 2 \) generators can always be written with only one generator.
- Mimicking an example from lectures, we assume the ideal is principal, so it is \( (f(x))_{\Z[x]} \), for some \( f(x) \in \Z[x] \).
- Since it equals \( (2,x)_{\Z[x]} \), we must have \( 2 \) and \( x \in (f(x))_{\Z[x]} \), which means \( 2 = f(x)g(x) \) and \( x = f(x)h(x) \) for some \( g, h \in \Z[x] \).
- In lectures, you used the norm map to find a small list of possible generators. Here we use degrees. We see \( 2 = f(x)g(x) \) means \( f(x) \) is constant, and reduce to the cases \( f(x) = 1 \) or \( f(x) = 2 \).
- Then we check that \( x \notin (2)_{\Z[x]} \) to eliminate \( f(x) = 2 \). And check \( 1 \notin (2,x)_{\Z[x]} \) to eliminate \( f(x) = 1 \).
- Hwk7 Q1) This question caused a number of problems. I ended up discussing the ideas behind this with one group.
- First it's good to have some intuitive/informal idea of why the isomorphisms ought to work.
- For Q1ii), with \( \Z[x]/(n,x) \), we can take some polynomial \( a_0 + a_1 x + a_2 x^2 + \cdots + a_d x^d \). Since \( x \in (n,x)_{\Z[x]} \), we can kill all powers of \( x \), and reduce the polynomial to just \( a_0 \).
- Then since \( n \in (n,x) \), we can get rid of all multiples of \( n \), to reduce the polynomial to \( a_0 \pmod{n} \).
- So it's plausible that \( \Z[x]/(n,x) \cong \Z/n \).
- For Q1i), plug in a few values of \( a \) and \( b \) to see \( 2, \sqrt{2} \in I \), where \( I = \{ 2a + b\sqrt{2} \mid a,b \in \Z \} \). Then we can play the same game.
- Take \( a + b \sqrt{2} \in \Z[\sqrt{2}] \). Since \( \sqrt{2} \in I \), we can kill the \( b \sqrt{2} \), and reduce to just \( a \). Then we can get rid of multiples of 2, to reduce to \( a \pmod{2} \).
- Now a good guess is that \( \Z[\sqrt{2}]/I \cong \Z/2 \).
- To show something like this rigorously, we should always use the First Isomorphism Theorem.
- (Somehow) write down a surjective ring homomorphism \( \phi \colon \Z[\sqrt{2}] \to \Z/2 \), which has kernel \( I \).
- With a bit of experience you might see to just write down \( \phi(a + b\sqrt{2}) = \overline{a} \). But knowing \( 2, \sqrt{2} \in \ker\phi \), you can derive what map \( \phi \) must be.
- By definition of a ring homomorphism we need \( \phi(0) = \overline{0} \) and \( \phi(1) = \overline{1} \). But also \( \phi(a+b\sqrt{2}) = \phi(a) + \phi(b)\phi(\sqrt{2}) \), which equals \( \phi(a) \) if we have \( \phi(\sqrt{2}) = 0 \). Then think about what happens when \( a = 2k \) is even, and when \( a = 2k+1 \) is odd. This tells you everything...
- Now carefully check the conditions for the First Isomorphism Theorem. Is \( \phi \) actually a ring homomorphism? Is \( \im\phi = \Z/2 \)? Is \( \ker\phi = I \)? Then you can conclude \( \Z[\sqrt{2}]/I \cong \Z/2 \).
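Extra: For Q1ii), the 'solve a system of equations' approach can even be run as a brute-force search, since there are only 25 linear representatives \( ax + b \) to try. A short Python sketch (not part of the course material), storing \( ax + b \) as the pair \( (a, b) \) and reducing products with \( x^2 \equiv -x - \overline{2} \equiv \overline{4}x + \overline{3} \):

```python
p = 5
# Work in (Z/5)[x] / (x^2 + x + 2), using x^2 = 4x + 3 (mod 5).
# A linear representative a*x + b is stored as the pair (a, b).

def mul(u, v):
    (a1, a0), (b1, b0) = u, v
    # (a1 x + a0)(b1 x + b0) = a1 b1 x^2 + (a1 b0 + a0 b1) x + a0 b0
    x2, x1, x0 = a1 * b1, a1 * b0 + a0 * b1, a0 * b0
    return ((x1 + 4 * x2) % p, (x0 + 3 * x2) % p)   # substitute x^2 = 4x + 3

target = (2, 3)                                     # the class of 2x + 3
for a in range(p):
    for b in range(p):
        if mul(target, (a, b)) == (0, 1):           # the product is the class of 1
            print("inverse:", (a, b))               # finds (3, 1), i.e. 3x + 1
```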
Tutorial 3 - Week 7 - 24 Nov 2015 - Quotient rings and ideals
- Slides: Warmup/recap
- Question: My version of question 2
- Solution: Solution to my version of question 2
- Proof: The inverse of an isomorphism is indeed a ring homomorphism
- Example: For polynomials over rings that are not fields, repeatedly eliminating high powers of \( x \) in quotients might not work.
Here are the slides from the recap and warmup questions. I started with a recap about ideals and quotient rings. I reminded you about the definition of an ideal using the 'black-hole' conditions. (I also mentioned the fact that ideals must be non-empty, which is missing from Dr Stasinski's notes.) Then we saw how to define quotient rings. An ideal \( I \) gives a notion of equivalence in the ring \( R \). We can define operations on equivalence classes, to make the set of equivalence classes into a ring, the quotient ring.
I tried to convince you that you are already familiar with some quotient rings. I showed how elements in \( \mathbb{R}[x] / (x^2 + 1)_{\mathbb{R}[x]} \) correspond to complex numbers. The different elements in an equivalence class just correspond to different ways of writing a complex number, using \( i^2 = -1 \) to simplify expressions.
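To make this correspondence concrete, here is a tiny Python sketch (not part of the course material) which multiplies two linear representatives \( a + bx \) in \( \mathbb{R}[x] / (x^2 + 1)_{\mathbb{R}[x]} \) using \( x^2 \equiv -1 \), and compares the result with the product of the corresponding complex numbers.

```python
def mul_mod_x2_plus_1(u, v):
    # u = (a, b) represents a + b*x in R[x]/(x^2 + 1); use x^2 = -1
    (a, b), (c, d) = u, v
    # (a + bx)(c + dx) = ac + (ad + bc) x + bd x^2  =  (ac - bd) + (ad + bc) x
    return (a * c - b * d, a * d + b * c)

u, v = (1.0, 2.0), (3.0, -1.0)
print(mul_mod_x2_plus_1(u, v))     # (5.0, 5.0)
print(complex(*u) * complex(*v))   # (5+5j): the same answer, with x playing the role of i
```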
Lastly, I gave a recap of ideals generated by elements. In some sense, \( (a_1, \ldots, a_n)_R \) is the smallest ideal containing \( a_1, \ldots, a_n \). This gives us a way of showing two ideals are equal just by working with their generators. We do this by showing the generators of one ideal are already contained in the other, and vice-versa.
- Q1) This was perhaps the most conceptually difficult question. The statement of the question sounds obvious: equations in isomorphic rings behave the same way. But how to show this rigorously?
- If the rings are isomorphic, we should start by choosing some isomorphism \( f \colon R \to S \). This means \( f \) is a bijective ring homomorphism.
- If \( r \in R \) satisfies the equation \( r^2 = r \), then we can apply \( f \) to both sides of this, to translate this over to something in \( S \).
- We get \( f(r^2) = f(r) \), and since \( f \) is a homomorphism \( f(r^2) = f(r)^2 \). So \( f(r) \) satisfies the equation \( s^2 = s \) in \( S \).
- Every solution \( r \) in \( R \) gives rise to a solution \( f(r) \) in \( S \). But does every solution in \( S \) arise in this way? We need to repeat the argument going the other way with \( f^{-1} \colon S \to R \).
- For this to work \( f^{-1} \) must be a homomorphism. But this is not part of the definition of an isomorphism, so we need to prove it! Here is a proof that \( f^{-1} \) is indeed a ring homomorphism.
- Q2) It appears that my idea to break-up question 2 into 'simpler' parts backfired a bit.
- With Q2i) I wanted to force you to use a more precise approach to listing all the equivalence classes in the quotient ring.
- Intuitively you might have tried to use the fact \( (x^2 + \overline{1}) + I = 0 + I \) to replace \( x^2 \) with \( -\overline{1} \) and eliminate higher powers of \( x \). This means you can get a linear representative \( (ax + b) + I \).
- See the solution to my version of question 2, for an explanation of how the division algorithm gives this result more rigorously.
- Extra: To do this procedure of repeatedly eliminating high powers of \( x \) to get down to fixed-degree representatives, we need to work with polynomials over a field. Here is an example of what can go wrong over other rings, like \( \mathbb{Z} \).
- We checked the calculations in Q2ii) as part of the warmup and recap. Completing the rest of the addition and multiplication tables of the quotient ring involves a number of similar calculations.
- Q3) We covered Q3i) as part of the warmup. The rest of this question is a number of similar calculations. Check the generators of one ideal are already contained in the other ideal.
- It can require some trial-and-error to determine how to write the generators of one ideal in terms of the generators of the other.
- Try computing some small multiples of the generators, and looking at sums/differences. It might take a number of steps.
- If an ideal \( I \) has only one generator (principal ideal), try dividing the generators of \( J \) by it (as real/complex numbers) to determine what multiples they are.
Tutorial 2 - Week 5 - 10 Nov 2015 - Polynomial irreducibility
- Slides: Warmup/recap
- Handout: Eisenstein and the discriminant
- Handout: Polynomials reducible modulo every prime \( p \)
Here are the slides from the recap and warmup questions. I reminded you about how degree 2, 3 and 4 polynomials can factor. We saw degree 2 and 3 polynomials are irreducible if and only if they have no root. But degree 4 polynomials can still factor as quadratic \( \times \) quadratic. Remember to check this possibility.
I also reminded you of the Rational Root Theorem, for finding a complete list of candidates for the rational roots of a polynomial. Check by substituting in, to see if they give roots.
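The Rational Root Theorem is also easy to automate if you want to double-check a by-hand search. Here is a short Python sketch (not part of the course material; the polynomial \( x^3 - x - 2 \) is a made-up example, not one from the problem sheet):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_root_candidates(coeffs):
    # coeffs = [a_0, a_1, ..., a_n] for a_0 + a_1 x + ... + a_n x^n with a_0, a_n nonzero;
    # any rational root p/q (in lowest terms) must have p | a_0 and q | a_n
    return {s * Fraction(top, bottom)
            for top in divisors(coeffs[0])
            for bottom in divisors(coeffs[-1])
            for s in (1, -1)}

def value(coeffs, x):
    return sum(a * x ** i for i, a in enumerate(coeffs))

coeffs = [-2, -1, 0, 1]                              # x^3 - x - 2
print(sorted(rational_root_candidates(coeffs)))      # the candidates -2, -1, 1, 2
print([x for x in rational_root_candidates(coeffs) if value(coeffs, x) == 0])
# []: no rational roots, and the degree is 3, so it is irreducible over Q
```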
Lastly, I talked about the Eisenstein criterion as a powerful way of proving some polynomials are irreducible. If you see high degree polynomials, \( \geq 5 \) say, then Eisenstein is usually a good idea.
- Q1) We saw that Eisenstein can sometimes be applied to shifts of a polynomial, even if it can't be applied to the original.
- If \( f(x) = g(x)h(x) \) is reducible, then the shift \( f(x+a) = g(x+a)h(x+a) \) is also reducible. So proving \( f(x+a) \) is irreducible, proves \( f(x) \) is irreducible.
- We found that \( a = 3 \), or \( a = 10 \) gives a good shift, and \( p = 7 \) works.
- Don't be afraid to just try out a few values for \( a \). You don't need an abstract theoretical explanation for why \( a = 3 \). If \( a = 3 \) works, it works!
- Extra: Notice that \( p = 7 \) divides the discriminant \( \Delta = -7 \) of \( x^2 + x + 2 \). This is not a coincidence. I gave a handout about some general theory behind Eisenstein and the discriminant.
- Extra: Using the ideas on the handout, you can prove there are some irreducible polynomials which can't be detected at all with Eisenstein.
- Q2i,ii) I covered these as part of the warmup/recap. For 2i), the degree is 2, so having no roots proves irreducibility. For 2ii), the degree is very high. Unless it has lots of rational roots, factoring it will be tedious (imagine trying to check for deg 2 \( \times \) deg 6, deg 3 \( \times \) deg 5, and deg 4 \( \times \) deg 4 by hand...). We tried something clever, like Eisenstein.
- In 2i) you can use the discriminant \( \Delta = -8 \lt 0 \) to conclude there are no rational (or even real) roots.
- Be aware of potential problems if you try this in \( \Z/p \). What numbers are squares modulo \( p \)? Or in a more general ring?
- In \( \Z/2 \) the quadratic formula breaks completely. Since \( \overline{2} = \overline{0} \) you would be dividing by 0 when you try to write \( x = \frac{1}{2a} ( -b \pm \sqrt{b^2 - 4ac} ) \).
- For example, \( x^2 + x + \overline{1} \) has no solutions in \( \Z/2 \), even though the discriminant would be \( \overline{1}^2 - \overline{4}\times\overline{1}\times\overline{1} = \overline{1}^2 \).
- The safest thing to do is to check all the potential roots.
- Q2iii) Here we had an example of a degree 4 polynomial. When tackling degree 4 polynomials, a good first step is to find roots. Either we find a root, and can take out a factor. Or we eliminate 3 of the 4 possible ways it can factor.
- This polynomial has a root. We can factor this out to get a deg 1 \( \times \) deg 3.
- We need to see if the cubic is still reducible. But for deg 2 or 3, reducible is equivalent to having (rational) roots.
- The cubic has no rational roots, so is irreducible. Done.
- Q2iv) Here we had another degree 4 polynomial. (Don't be put off by the hint.) This one has no roots, so we need to check if it factors as deg 2 \( \times \) deg 2.
- Suppose it factors as \( (ax^2 + bx + c)(dx^2 + ex + f) \), with \( a, \ldots, f \in \Q \). We can multiply this out to get a system of equations by comparing coefficients.
- Gauss's lemma lets us assume \( a, \ldots, f \in \Z \), rather than \( \Q \). This means we can check through the possibilities on a case-by-case basis. E.g. \( ad = 2 \) has only 4 solutions over \( \Z \), namely \( a = -2 \), \( a = -1 \), \( a = 1 \), \( a = 2 \).
- By swapping the factors, and/or multiplying both by \( -1 \), we can assume \( a = 2, d = 1 \), and get a simpler system.
- Now from \( cf = 2 \), we get 4 cases to check \( c = -2 \), \( c = -1 \), \( c = 1 \), \( c = 2 \). Three of these give inconsistent equations for \( b, e \). But \( c = 1 \), \( f = 2 \) works.
- So this polynomial factors into two irreducible quadratics.
- Q3) Here we had yet another degree 4 polynomial. This time, we're going to use a neat trick to eliminate the deg 2 \( \times \) deg 2 factorisation.
- Firstly we're asked to factor it in \( \Z/3[x] \). On the sheet Dr Stasinski wrote \( \Z_3 \), but this is meant to be \( \Z/3 \).
- It factors as deg 1 \( \times \) deg 3 in \( \Z/3[x] \). The degree 3 factor is irreducible because it has no roots. Just check \( x = \overline{0}, \overline{1}, \overline{2} \).
- If the polynomial factored as deg 2 \( \times \) deg 2 in \( \Q[x] \), then we would get the same type of factorisation in \( \Z/3[x] \). An irreducible cubic factor could not appear.
- Therefore the polynomial cannot factor as deg 2 \( \times \) deg 2. And the remaining possibilities would all have linear factors, meaning roots.
- Extra: I gave a handout about reducing polynomials modulo \( p \). If the reduction mod \( p \) is irreducible, then the original polynomial is also irreducible.
- Extra: Some polynomials behave `weirdly' modulo primes. There are irreducible polynomials which are reducible modulo all primes. There are also polynomials which always factorise into `equal degree' factors modulo a prime. The handout looks at a specific example, and hints at the deeper reasons for this.
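Extra: Since \( \Z/p \) is finite, checking a polynomial for roots modulo \( p \) (as in Q2 and Q3) is a finite computation that is easy to automate. A short Python sketch (not part of the course material):

```python
def roots_mod_p(coeffs, p):
    # coeffs = [a_0, a_1, ..., a_n]; return all roots of the polynomial in Z/p
    return [x for x in range(p)
            if sum(a * x ** i for i, a in enumerate(coeffs)) % p == 0]

# x^2 + x + 1 over Z/2 (the example from Q2 above): no roots, hence irreducible over Z/2
print(roots_mod_p([1, 1, 1], 2))   # []
# for the degree 3 factor in Q3, the same check over Z/3 (x = 0, 1, 2) settles irreducibility
```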
Tutorial 1 - Week 3 - 27 Oct 2015 - Introduction to rings
- Q1) We investigated how many solutions the quadratic equation \( x^2 = 1 \) has in various different rings. Surprisingly we found the equation has 4 solutions in \( \Z/8 \), instead of the 2 we might expect! (A brute-force check is sketched at the end of these notes.)
- For integral domains (a 'nice' ring), we could prove that \( x^2 = 1 \) has \( \leq 2 \) solutions. The usual process of factoring (using distributivity in a ring), and deducing one factor equals 0 (using the integral domain-ness) works here.
- Extra: Later on you will see more generally that in an integral domain, a polynomial \( f(x) \) has at most \( \deg(f) \) roots.
- Q2) In the definition of an integral domain, and a field, we always assume that \( 0 \neq 1 \). We saw that \( 0 = 1 \) implies that \( R = \{ 0 \} \), so this excludes only the `zero ring'. This then means \( 0 \neq 1 \) is equivalent to the ring having \( \geq 2 \) elements.
- At first, this feels like a very obvious result, especially if you are still thinking only about the usual addition and multiplication you know from \( \Z, \R, \C \), ...
- When proving something about an abstract ring, you should be careful to check your deductions only use the ring axioms given in the definition of a ring. Don't assume the operations behave like the standard multiplication; rings with weird operations, like the Boolean ring on `all subsets of a set', do exist!
- Recalling the result that \( 0\cdot a = 0 \) in a ring, as shown in the problem class, we could give a rigorous proof which only uses the ring axioms.
- Q3) We saw that \( \Z \) has no subrings, other than \( \Z \) itself.
- Since a subring contains 0 and 1, we get that 1, 1+1, 1+1+1, ..., and their inverses, are in the subring.
- Extra: The idea of repeatedly adding the multiplicative identity 1 to itself will give us a way of interpreting the usual integers as elements of any ring.
- Extra: There are subsets which are very close to being subrings, except they lack multiplicative identities. For example, \( 5\Z \). These are called ideals, and you will study them soon.
- Q4) We started to look at finite integral domains, with the goal of showing every finite integral domain is a field.
- In an integral domain, multiplication by \( r \neq 0 \) is injective because we can `cancel'. For a finite set \( X \), comparing with the size of the image shows that an injective map \( X \to X \) is surjective. So for a finite integral domain, multiplication by \( r \) must hit 1, meaning \( r \) has a multiplicative inverse.
- Extra: If we drop commutativity from the definition of a field, we get a division ring, for example the quaternions. Dropping commutativity from an integral domain gives a domain. One can show that any finite domain, or finite division ring is a field (Wedderburn's little theorem). Since only a small number of finite fields exist (compared with all finite rings), lots of finite rings will have zero divisors.
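Extra: The counts from Q1 are easy to reproduce by brute force. A short Python sketch (not part of the course material):

```python
def square_roots_of_one(n):
    # the solutions of x^2 = 1 in Z/n
    return [x for x in range(n) if (x * x) % n == 1]

print(square_roots_of_one(8))   # [1, 3, 5, 7]: four solutions in Z/8
print(square_roots_of_one(7))   # [1, 6]: at most two solutions, since Z/7 is a field
```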