| Submitter | Scott |
|---|---|
| Date | 2010-09-02 05:40:50 |
| Message ID | <0703EF8685AB4BB091774844213B7DF6@m3a78> |
| Download | mbox, patch |
| Permalink | /patch/1853/ |
| State | Superseded |
Comments
Dear Scott,

On Thursday, 2010-09-02 at 00:40 -0500, Scott wrote:

> The subversion comment for -r 5571 states:
>
>> The AMD Fam10 code breaks with gcc 4.5.0.
>> Potentially caused by reordering. Going back to 4.4.4,
>> which is known working on Fam10, until gcc or the Fam10 code is fixed.
>>
>> Signed-off-by: Stefan Reinauer <stepan@coresystems.de>
>> Acked-by: Stefan Reinauer <stepan@coresystems.de>
>
> I encountered the same problem and debugged it. The AP code that disables
> cache as RAM before the final halt has to be all inline. Function calls
> require a valid stack, and the stack is kept in the very cache as RAM that
> the code is disabling. I found with gcc 4.5.0, the code for rdmsr,
> disable_cache, and enable_cache and not getting inlined as intended.
> Function calls are

s/and/are/

> generated, and the first one after the AP clears MSR 268 fails. The
> solution is to force these functions to generate inline code by adding
> __attribute__((always_inline)) to their declarations:

Great find! Could you please send a patch according to the development
guidelines? Especially, do not forget to add your Signed-off-by line.

Just to leave no doubt, could you please state that you tested your fix
using GCC 4.5.0 on the hardware in question?

[…]

Thanks,

Paul
Dear Scott,

On Thursday, 2010-09-02 at 10:59 +0200, Paul Menzel wrote:

> On Thursday, 2010-09-02 at 00:40 -0500, Scott wrote:
>
>> The subversion comment for -r 5571 states:
>>
>>> The AMD Fam10 code breaks with gcc 4.5.0.
>>> Potentially caused by reordering. Going back to 4.4.4,
>>> which is known working on Fam10, until gcc or the Fam10 code is fixed.
>>>
>>> Signed-off-by: Stefan Reinauer <stepan@coresystems.de>
>>> Acked-by: Stefan Reinauer <stepan@coresystems.de>
>>
>> I encountered the same problem and debugged it. The AP code that disables
>> cache as RAM before the final halt has to be all inline. Function calls
>> require a valid stack, and the stack is kept in the very cache as RAM that
>> the code is disabling. I found with gcc 4.5.0, the code for rdmsr,
>> disable_cache, and enable_cache and not getting inlined as intended.
>> Function calls are
>
> s/and/are/
>
>> generated, and the first one after the AP clears MSR 268 fails. The
>> solution is to force these functions to generate inline code by adding
>> __attribute__((always_inline)) to their declarations:
>
> Great find! Could you please send a patch according to the development
> guidelines? Especially, do not forget to add your Signed-off-by line.

They are documented in the wiki [1].

> Just to leave no doubt, could you please state that you tested your fix
> using GCC 4.5.0 on the hardware in question?
>
> […]

Thanks,

Paul

[1] http://www.coreboot.org/Development_Guidelines
Dear Scott,

On Thursday, 2010-09-02 at 16:06 -0500, Scott wrote:

> -----Original Message-----
> From: coreboot-bounces@coreboot.org [mailto:coreboot-bounces@coreboot.org] On Behalf Of Paul Menzel
> Sent: Thursday, September 02, 2010 04:11 AM
> To: coreboot@coreboot.org
> Subject: Re: [coreboot] [PATCH] fix 'AMD Fam10 code breaks with gcc 4.5.0'
>
> On Thursday, 2010-09-02 at 10:59 +0200, Paul Menzel wrote:
>> On Thursday, 2010-09-02 at 00:40 -0500, Scott wrote:
>>
>>> The subversion comment for -r 5571 states:
>>>
>>>> The AMD Fam10 code breaks with gcc 4.5.0.
>>>> Potentially caused by reordering. Going back to 4.4.4,
>>>> which is known working on Fam10, until gcc or the Fam10 code is fixed.
>>>>
>>>> Signed-off-by: Stefan Reinauer <stepan@coresystems.de>
>>>> Acked-by: Stefan Reinauer <stepan@coresystems.de>
>>>
>>> I encountered the same problem and debugged it. The AP code that disables
>>> cache as RAM before the final halt has to be all inline. Function calls
>>> require a valid stack, and the stack is kept in the very cache as RAM that
>>> the code is disabling. I found with gcc 4.5.0, the code for rdmsr,
>>> disable_cache, and enable_cache and not getting inlined as intended.
>>> Function calls are
>>
>> s/and/are/
>>
>>> generated, and the first one after the AP clears MSR 268 fails. The
>>> solution is to force these functions to generate inline code by adding
>>> __attribute__((always_inline)) to their declarations:
>>
>> Great find! Could you please send a patch according to the development
>> guidelines? Especially, do not forget to add your Signed-off-by line.
>
> They are documented in the wiki [1].
>
>> Just to leave no doubt, could you please state that you tested your fix
>> using GCC 4.5.0 on the hardware in question?
>>
>> […]
>
> OK, I resubmitted it. Currently tested on SimNow only. I will make a
> Tilapia binary and ask the AMD guys to test it. By the way, the patch
> file has Windows-style line endings. I wonder if others can use it easily...

I guess next time this message should go to the list, since it contains
information interesting to the others too.
Dear Scott,

On Friday, 2010-09-03 at 00:45 +0200, Paul Menzel wrote:

> On Thursday, 2010-09-02 at 16:06 -0500, Scott wrote:
>
>> -----Original Message-----
>> From: coreboot-bounces@coreboot.org [mailto:coreboot-bounces@coreboot.org] On Behalf Of Paul Menzel
>> Sent: Thursday, September 02, 2010 04:11 AM
>> To: coreboot@coreboot.org
>> Subject: Re: [coreboot] [PATCH] fix 'AMD Fam10 code breaks with gcc 4.5.0'
>>
>> On Thursday, 2010-09-02 at 10:59 +0200, Paul Menzel wrote:
>>> On Thursday, 2010-09-02 at 00:40 -0500, Scott wrote:
>>>
>>>> The subversion comment for -r 5571 states:
>>>>
>>>>> The AMD Fam10 code breaks with gcc 4.5.0.
>>>>> Potentially caused by reordering. Going back to 4.4.4,
>>>>> which is known working on Fam10, until gcc or the Fam10 code is fixed.
>>>>>
>>>>> Signed-off-by: Stefan Reinauer <stepan@coresystems.de>
>>>>> Acked-by: Stefan Reinauer <stepan@coresystems.de>
>>>>
>>>> I encountered the same problem and debugged it. The AP code that disables
>>>> cache as RAM before the final halt has to be all inline. Function calls
>>>> require a valid stack, and the stack is kept in the very cache as RAM that
>>>> the code is disabling. I found with gcc 4.5.0, the code for rdmsr,
>>>> disable_cache, and enable_cache and not getting inlined as intended.
>>>> Function calls are
>>>
>>> s/and/are/
>>>
>>>> generated, and the first one after the AP clears MSR 268 fails. The
>>>> solution is to force these functions to generate inline code by adding
>>>> __attribute__((always_inline)) to their declarations:
>>>
>>> Great find! Could you please send a patch according to the development
>>> guidelines? Especially, do not forget to add your Signed-off-by line.
>>
>> They are documented in the wiki [1].
>>
>>> Just to leave no doubt, could you please state that you tested your fix
>>> using GCC 4.5.0 on the hardware in question?
>>>
>>> […]
>>
>> OK, I resubmitted it. Currently tested on SimNow only. I will make a
>> Tilapia binary and ask the AMD guys to test it. By the way, the patch
>> file has Windows-style line endings. I wonder if others can use it easily...
>
> I guess next time this message should go to the list, since it contains
> information interesting to the others too.

[I hit »Send« too early. I am sorry!]

… therefore I am replying to the list again.

I hope others will have time to test this.

We will see if the developers have a problem when committing your patch. If
yes, I guess Patrick can give you a hint on how to circumvent this problem.

Thanks,

Paul

PS: Do you know if you can configure Outlook to use a reply style similar to
what everyone else uses on this list, i.e. a line »Joe Doe wrote on …«
followed by the quoted text with »>« in front of every line?
Patch
```diff
Index: src/include/cpu/x86/msr.h
===================================================================
--- src/include/cpu/x86/msr.h	(revision 5763)
+++ src/include/cpu/x86/msr.h	(working copy)
@@ -29,7 +29,7 @@
 	msr_t msr;
 } msrinit_t;
 
-static inline msr_t rdmsr(unsigned index)
+static inline __attribute__((always_inline)) msr_t rdmsr(unsigned index)
 {
 	msr_t result;
 	__asm__ __volatile__ (
@@ -40,7 +40,7 @@
 	return result;
 }
 
-static inline void wrmsr(unsigned index, msr_t msr)
+static inline __attribute__((always_inline)) void wrmsr(unsigned index, msr_t msr)
 {
 	__asm__ __volatile__ (
 		"wrmsr"
Index: src/include/cpu/x86/cache.h
===================================================================
--- src/include/cpu/x86/cache.h	(revision 5763)
+++ src/include/cpu/x86/cache.h	(working copy)
@@ -74,7 +74,7 @@
 	asm volatile("invd" ::: "memory");
 }
 
-static inline void enable_cache(void)
+static inline __attribute__((always_inline)) void enable_cache(void)
 {
 	unsigned long cr0;
 	cr0 = read_cr0();
@@ -82,7 +82,7 @@
 	write_cr0(cr0);
 }
 
-static inline void disable_cache(void)
+static inline __attribute__((always_inline)) void disable_cache(void)
 {
 	/* Disable and write back the cache */
 	unsigned long cr0;
```